Dataset columns and observed value ranges:
added: string (date), 2025-04-01 04:05:38 to 2025-04-01 07:14:06
created: timestamp[us] (date), 2001-10-09 16:19:16 to 2025-01-01 03:51:31
id: string, length 4 to 10
metadata: dict
source: string, 2 classes
text: string, length 0 to 1.61M
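The provenance values in the metadata below point at gzipped JSON shards (e.g. gharchive-dolma-0004.json.gz). A minimal sketch of reading records with these columns, assuming Dolma-style gzipped JSONL and a local copy of that shard (the path and format are assumptions, not verified here):

import gzip
import json

# Hypothetical local shard; the filename is taken from the "provenance"
# field in the records below and is assumed to be gzipped JSONL.
SHARD = "gharchive-dolma-0004.json.gz"

with gzip.open(SHARD, "rt", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # Columns listed above: added, created, id, metadata, source, text.
        print(record["id"], record["source"], record["created"])
        print(record["metadata"].get("repo"), record["metadata"].get("url"))
        print(record["text"][:80].replace("\n", " "))
        break  # only the first record, for illustration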
2025-04-01T04:35:47.557655
2023-07-10T00:55:39
1795680339
{ "authors": [ "CLAassistant", "IgorDuino" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11708", "repo": "twitter/twemoji", "url": "https://github.com/twitter/twemoji/pull/621" }
gharchive/pull-request
Update LICENSE update copyright year Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
2025-04-01T04:35:47.562675
2013-03-22T12:25:15
12316917
{ "authors": [ "altwohill", "jyrkij" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11709", "repo": "twohill/silverstripe-homepagefordomain", "url": "https://github.com/twohill/silverstripe-homepagefordomain/issues/4" }
gharchive/issue
_t() Deprecation warning Using the latest SS3.0 branch, I get the following error after installing homepagefordomain. The $priority argument to _t() is deprecated, please use module inclusion priorities instead. Called from . Deprecation.php:173 Deprecation::notice(3.0,The $priority argument to _t() is deprecated, please use module inclusion priorities instead,4) i18n.php:1480 i18n::_t(SiteTree.HOMEPAGEFORDOMAIN,Domain(s),50,Listing domains that should be used as homepage) Core.php:360 _t(SiteTree.HOMEPAGEFORDOMAIN,Domain(s),50,Listing domains that should be used as homepage) HomepageForDomainExtension.php:29 Cleaning up old issues. Please feel free to reopen if you feel the issue is still valid!
2025-04-01T04:35:47.563960
2022-06-30T06:11:03
1289645914
{ "authors": [ "connorhsm" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11710", "repo": "twohoursonelife/junior", "url": "https://github.com/twohoursonelife/junior/issues/12" }
gharchive/issue
Deploy commands on docker container startup Instead of manually deploying commands, we could deploy commands as part of the start-up of the container. This will ensure commands are deployed from the same environment (Node version) that runs the bot. It also removes the hassle of locally maintaining a Node install only for command deployment. We should take into consideration whether this will affect our rate limits with Discord's API. Commands are deployed in the workflow too, so this doesn't really make sense, but it would help for dev deployments. A local Node install is maintained for development anyway, so not going to do this.
2025-04-01T04:35:47.565326
2019-10-30T00:37:45
514318700
{ "authors": [ "dposada", "scrosby" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11711", "repo": "twosigma/Cook", "url": "https://github.com/twosigma/Cook/pull/1275" }
gharchive/pull-request
Improve logging as we filter jobs from being considered. Changes proposed in this PR Log as we filter out jobs from being considered Why are we making these changes? Make it easier to determine where and why we cease to consider jobs for scheduling Will merge when green
2025-04-01T04:35:47.566950
2015-03-05T03:37:53
59900582
{ "authors": [ "buob", "scottdraves" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11712", "repo": "twosigma/beaker-notebook", "url": "https://github.com/twosigma/beaker-notebook/issues/1242" }
gharchive/issue
Table highlight interrupted by sorted column The selection highlight isn't interrupted, it's just the hover that is, is that still worth overriding that CSS? yes please
2025-04-01T04:35:47.569810
2014-09-29T15:38:05
44307767
{ "authors": [ "scottdraves" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11713", "repo": "twosigma/beaker-notebook", "url": "https://github.com/twosigma/beaker-notebook/issues/703" }
gharchive/issue
login shell causes PATH problems on Mac Our method of finding IPython on Mac is to create a login shell during application boot in beaker.command: https://github.com/twosigma/beaker-notebook/blob/master/core/beaker.command#L30 However, if the user sets up their path in the terminal because they use virtualenv, or they don't use bash and so their PATH setup is not in .bash_profile, then this fails. Related: https://github.com/twosigma/beaker-notebook/issues/1178 I added a path per plugin to the prefs but left in the login-shell hack on Mac because I don't want to require users to edit their prefs to access Python, since editing prefs is now painful. I think true resolution of this needs to wait until we control the installation of Python, or we have some way to guess an initial value for the prefs. So much of the problem has been solved, but it would still be good to remove this hack; it will just require more work, so punting this to the next release (though that's optimistic)
2025-04-01T04:35:47.571223
2017-11-22T03:55:20
275937284
{ "authors": [ "piorek", "scottdraves" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11714", "repo": "twosigma/beakerx", "url": "https://github.com/twosigma/beakerx/issues/6371" }
gharchive/issue
get rid of the OK button of our tab http://localhost:8888/tree#beakerx sync continuously. when the values are written to the server, show a yellow spinner that turns into a green check. done
2025-04-01T04:35:47.602339
2020-06-19T11:49:53
641913674
{ "authors": [ "AlfaJackal", "twstokes" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11715", "repo": "twstokes/healthdata_influx", "url": "https://github.com/twstokes/healthdata_influx/issues/7" }
gharchive/issue
Not able to visualize SleepData How do I visualize my sleep data? As you can see, data is available, but if I try to make a graph, it shows nothing. My guess is that it has something to do with the fact that the sleep data is formatted as logs and not metrics. Would love to show those as well! Unfortunately I don't have anything to generate "is asleep" data, but I do have some records for "is in bed". Apple has three types for this record documented here: https://developer.apple.com/documentation/healthkit/hkcategoryvaluesleepanalysis I think this is more of an Influx / Grafana issue you're running into, where you need to determine how to stitch together data points to make them continuous over time.
2025-04-01T04:35:47.609336
2017-07-11T21:37:13
242191404
{ "authors": [ "codecov-io", "eharney", "txels" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11716", "repo": "txels/ddt", "url": "https://github.com/txels/ddt/pull/49" }
gharchive/pull-request
Use raw string for regex This issues a DeprecationWarning on python 3.6 due to \W not being a valid escape character in regular strings. Codecov Report Merging #49 into master will not change coverage. The diff coverage is 100%. @@ Coverage Diff @@ ## master #49 +/- ## ===================================== Coverage 100% 100% ===================================== Files 1 1 Lines 107 107 ===================================== Hits 107 107 Impacted Files Coverage Δ ddt.py 100% <100%> (ø) :arrow_up: Continue to review full report at Codecov. Legend - Click here to learn more Δ = absolute <relative> (impact), ø = not affected, ? = missing data Powered by Codecov. Last update 19e9b28...1731907. Read the comment docs. Thanks for spotting and fixing this! Will need to add python 3.6 in .travis.yml file if 3.6 is now an officially supported version.
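A standalone sketch of the warning this PR fixes; the pattern below is illustrative, not ddt's actual regex:

import re

# In a normal string literal, "\W" is an invalid escape sequence, so
# Python 3.6+ emits "DeprecationWarning: invalid escape sequence '\W'"
# when the source is compiled (newer versions raise a SyntaxWarning).
# The backslash still reaches the regex engine, but only by accident.
legacy = re.compile("\W+")

# A raw string hands the backslash to the regex engine explicitly and is
# silent on every Python version -- the change made in this PR.
fixed = re.compile(r"\W+")

print(legacy.pattern == fixed.pattern)  # True: same regex, no warning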
2025-04-01T04:35:47.632468
2015-04-11T02:24:00
67721816
{ "authors": [ "inaveu", "justin-tang", "txthinking" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11718", "repo": "txthinking/google-hosts", "url": "https://github.com/txthinking/google-hosts/issues/80" }
gharchive/issue
Error when running find.sh $ uname -m i686 So when running find.sh it complains that iprange_i686 cannot be found; what should I do? Thanks. Try: $ cd scripts $ cp iprange_i386 iprange_i686 then run find again. @inaveu Does it work now? find.sh shows the following problems; how do I fix them: ./find.sh: line 23: ./iprange_x86_64: cannot execute binary file ./find.sh: line 24: ./iprange_x86_64: cannot execute binary file ./find.sh: line 39: ((: i=: syntax error: operand expected (error token is "=") @justinTang Is this a Linux system? https://github.com/txthinking/google-hosts/blob/master/scripts/README.md
2025-04-01T04:35:47.686179
2024-09-27T04:21:16
2551960085
{ "authors": [ "itcd", "tylerebowers" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11719", "repo": "tylerebowers/Schwab-API-Python", "url": "https://github.com/tylerebowers/Schwab-API-Python/issues/27" }
gharchive/issue
The refresh token times out 1 day sooner than necessary. On lines 108 to 110 of api.py: if (datetime.datetime.now(datetime.timezone.utc) - self._refresh_token_issued).days >= (self._refresh_token_timeout - 1) or force: # check if we need to update refresh (and access) token print("[Schwabdev] The refresh token has expired, please update!") self._update_refresh_token() For example, if we set _refresh_token_timeout=7 _refresh_token_issued=datetime.datetime.fromisoformat('2024-09-21T03:59:00.000000+00:00') and assume datetime.datetime.now(datetime.timezone.utc) is 2024-09-27 04:00:00.000000+00:00 In this case, only 6 days have passed since the refresh token was issued, meaning it is still valid. However, the condition (datetime.datetime.now(datetime.timezone.utc) - _refresh_token_issued).days >= (_refresh_token_timeout - 1) evaluates to True, which causes the refresh token to time out 1 day sooner than necessary. Therefore, self._refresh_token_timeout - 1 should be changed to self._refresh_token_timeout on line 108. Releasing a new update tonight that will warn the user when the refresh token has less than 12 hours remaining, and will initiate a refresh token update when 1 hour is remaining. This code is a bit of a legacy from the conversion from TDA, where the refresh token was valid for 90 days. Changed in PyPI 2.2.5. The user will be notified 12 hours before the refresh token expires. The refresh token update function will be run 1 hour before it expires.
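A self-contained sketch of the off-by-one described above, using the example values from the report rather than the library's actual code:

import datetime

# Example values from the issue, not real library state.
refresh_token_timeout = 7  # days the token is valid
refresh_token_issued = datetime.datetime.fromisoformat(
    "2024-09-21T03:59:00.000000+00:00"
)
now = datetime.datetime.fromisoformat("2024-09-27T04:00:00.000000+00:00")

elapsed_days = (now - refresh_token_issued).days  # 6, so still valid

# Condition as originally written (timeout - 1): fires a day too early.
print(elapsed_days >= refresh_token_timeout - 1)  # True -> premature refresh

# Condition as proposed in the issue (full timeout): waits the full 7 days.
print(elapsed_days >= refresh_token_timeout)      # False -> token kept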
2025-04-01T04:35:47.745282
2021-12-16T12:08:01
1082120248
{ "authors": [ "armanbilge", "bblfish" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11720", "repo": "typelevel/bobcats", "url": "https://github.com/typelevel/bobcats/issues/38" }
gharchive/issue
public/private key support? Hi, I am implementing the HTTP Signature RFC from IETF over in this repo httpSig. I am extracting a lib there I wrote to work for Akka so that it can also work with http4s. The IETF spec lists the hashing and signing algorithms in section 3.3 that should be implemented. For the moment I'd feel most comfortable if RSA worked, as I know that quite well :-) (see the code SignatureVerifier). But I don't see any tests for public key signature nor any code Implementing it. Is that something you intend to support soon? Thanks for writing up the detailed issue! I would definitely like to support RSA and other public/private key algorithms. I think we can add some traits maybe KeyGen[F], Signer[F], Verifier[F] 🤔 open to ideas here. A PR for this would be phenomenal! Working on this in https://github.com/typelevel/bobcats/pull/48
2025-04-01T04:35:47.756628
2021-11-19T03:04:38
1058077099
{ "authors": [ "armanbilge" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11721", "repo": "typelevel/feral", "url": "https://github.com/typelevel/feral/pull/51" }
gharchive/pull-request
Composable, effect-agnostic lambdas This was https://github.com/typelevel/feral/pull/45, before we started experimenting on that branch 😁 This re-works the original IOLambda concept, with two goals: Better composability Effect agnosticism A lambda is re-imagined as: type Lambda[F[_], Event, Result] = (Event, Context) => F[Option[Result]] And IOLambda now defines an abstract: def handler: Resource[IO, Lambda[IO, Event, Result]] This handler encapsulates the previously separate Setup: all resource acquisition goes into building the Lambda once, and that is the "setup". That Lambda is then installed as the handler. From here, we have two axes on which we can do composition: Middlewares/builders, e.g.: Lambda[F, Event, Result] => Lambda[F, Event, Result] HttpRoutes[F] => Lambda[F, ApiGatewayProxyEventV2, ApiGatewayProxyStructuredResultV2] Resource-injection, which is simply flatMapping resources: def routes[F[_]: Async](client: Client[F], db: Session[F]): HttpRoutes[F] = ??? def handler = for { client <- EmberClientBuilder.default[IO].build db <- skunk.Session.single[IO](...) } yield Http4sLambda(routes(client, db)) @bpholt and I have put this to good use in https://github.com/typelevel/feral/pull/45 and https://github.com/typelevel/feral/pull/50 to create a Natchez tracing middleware for lambda and I'm pretty confident that this is the right approach. What's the advantage to Lambda itself being parametric in F? Not really sure how to answer this. Why is http4s HttpRoutes/HttpApp parametric in F? I think these are pretty similar ideas. The new Setup encoding seems really strange to me, unless I'm misunderstanding what it is meant to be doing. Is this orthogonal to the Resource support idea? Hmm, after re-encoding the old way seemed really strange to me :P Not sure what you mean by orthogonal. Instead of "Setup" being an (optional) type parameter, Setup is now actually just the lambda handler i.e. (Event, Context) => Request. We use the Resource to build that lambda once, and we install it for all requests. Does that make more sense? Superseded by https://github.com/typelevel/feral/pull/60.
2025-04-01T04:35:47.769471
2022-03-19T21:34:35
1174376576
{ "authors": [ "scala-steward" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11722", "repo": "typelevel/keypool", "url": "https://github.com/typelevel/keypool/pull/360" }
gharchive/pull-request
Update cats-effect-kernel, ... to 3.3.8 Updates org.typelevel:cats-effect-kernel org.typelevel:cats-effect-std from 3.3.7 to 3.3.8. GitHub Release Notes - Version Diff I'll automatically update this PR to resolve conflicts as long as you don't change it yourself. If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below. Configure Scala Steward for your repository with a .scala-steward.conf file. Have a fantastic day writing Scala! Ignore future updates Add this to your .scala-steward.conf file to ignore future updates of this dependency: updates.ignore = [ { groupId = "org.typelevel" } ] labels: library-update, early-semver-patch, semver-spec-patch, commit-count:1 Superseded by #362.
2025-04-01T04:35:47.813085
2016-01-19T02:41:33
127352543
{ "authors": [ "dotta", "jroper", "patriknw" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11723", "repo": "typesafehub/sbteclipse", "url": "https://github.com/typesafehub/sbteclipse/pull/309" }
gharchive/pull-request
Ensure Eclipse plugin works when disabled In some situations, the eclipse plugin may be disabled on a particular project. If this happens, the eclipse command fails, saying the skipProject setting can't be found for that project. This ensures that in that situation, skipProject defaults to true, so that it doesn't fail, and the project is instead skipped. My particular use case is that we're writing a plugin that dynamically adds new projects to an sbt build - these projects are meta projects that resolve dependencies for services that need to be run, but not on the classpath of the user's project. This dynamic project addition does not apply auto plugins; it's very specific about which settings get enabled on the dynamically added projects, so the eclipse plugin, for example, is not enabled on it, and we don't want it to be enabled: this is a meta project, not something that should appear in a user's eclipse project. And we shouldn't have to make this project depend on the eclipse plugin just to disable the eclipse plugin. @dotta @patriknw here's the fix to the eclipse plugin issue. Build is failing due to #310 - openjdk7 started segfaulting on travis a few weeks ago. LGTM! LGTM
2025-04-01T04:35:47.819536
2019-12-17T02:35:07
538796534
{ "authors": [ "aromanarguello", "laurosilvacom" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11724", "repo": "typescript-cheatsheets/react-typescript-cheatsheet-es", "url": "https://github.com/typescript-cheatsheets/react-typescript-cheatsheet-es/pull/5" }
gharchive/pull-request
Translate contributing.md Translates CONTRIBUTING.md file to Spanish @laurosilvacom https://github.com/typescript-cheatsheets/react-typescript-cheatsheet-es/pull/5 Nice. But we need to add the CONTRIBUTORS.md page. We can call it CONTRIBUYENTES or we can call it COLABORADORES but it needs to be the same as CONTRIBUTORS.md. Let's leave the CONTRIBUYENDO.md as it is, for now. CONTRIBUYENDO.md = CONTRIBUTING.md ✅ Got it! To make sure I understood: I should remove my changes from Contribuyendo.md (leave it as is with your content) and create a Colaboradores.md? Correct! Sorry about the misunderstanding. No worries! Will get on it :) @laurosilvacom for the contributors file will we use the all-contributors bot, or will we hard code it? The all-contributors bot. Are you able to set it up? I can try 😅 I've used it in the past. The parent repo uses it, might as well try it out. What do you think? Go for it! Added it, let me know your thoughts. Some documentation on how to use: https://allcontributors.org/docs/en/bot/usage for anyone who might be interested
2025-04-01T04:35:47.873314
2019-01-04T06:41:36
395823530
{ "authors": [ "andolino", "b4dnewz" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11725", "repo": "typicode/json-server", "url": "https://github.com/typicode/json-server/issues/906" }
gharchive/issue
Accessing db.json in live I just want to access this url https://andolino.github.io/db/db.json using json-server to create a fake api @andolino here you go: json-server https://andolino.github.io/db/db.json as written in the documentation pretty easy
2025-04-01T04:35:47.887597
2024-11-20T09:28:49
2675136761
{ "authors": [ "elegaanz", "nleanba" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11726", "repo": "typst/packages", "url": "https://github.com/typst/packages/pull/1299" }
gharchive/pull-request
marginalia:0.1.0 I am submitting [x] an update for a package I have fixed the links in the URL: The example import now includes the correct version The image is no longer broken The link to the documentation PDF goes now to the specific version, even if I update my repository. There are no changes to anything other than the readme. Thanks.
2025-04-01T04:35:47.890958
2024-04-13T21:59:36
2241795067
{ "authors": [ "S1ngularity96", "laurmaedje" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11727", "repo": "typst/packages", "url": "https://github.com/typst/packages/pull/538" }
gharchive/pull-request
DRAFT: fhac:0.1.0 I am submitting [X] a new package [ ] an update for a package Description: The package serves as a template for bachelor's and master's thesis at the FH Aachen. I have read and followed the submission guidelines and, in particular, I [X] selected a name that isn't the most obvious or canonical name for what the package does [X] added a typst.toml file with all required keys [ ] added a README.md with documentation for my package [X] have chosen a license and added a LICENSE file or linked one in my README.md [ ] tested my package locally on my system and it worked [ ] excluded PDFs or README images, if any, but not the LICENSE [X] ensured that my package is licensed such that users can use and distribute the contents of its template directory without restriction, after modifying them through normal use. Closing this due to inactivity.
2025-04-01T04:35:47.893189
2024-07-31T10:34:22
2439712758
{ "authors": [ "MDLC01", "laurmaedje" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11728", "repo": "typst/typst", "url": "https://github.com/typst/typst/pull/4648" }
gharchive/pull-request
Better document numbering functions A friend of mine had trouble understanding that numbering functions are just regular functions that accept numbers and return content. I added a section to the documentation for the numbering function that explains that, and points the user to the documentation for the numbering argument for more information. Thank you! I like it.
2025-04-01T04:35:47.910958
2016-03-31T04:05:16
144783455
{ "authors": [ "kirmaljeetkaur", "tzyganu" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11729", "repo": "tzyganu/stock-filter", "url": "https://github.com/tzyganu/stock-filter/issues/8" }
gharchive/issue
Extension is not showing on frontend? I have installed the stock filter plugin, but it's not showing on the frontend. @kirmaljeetkaur Do you have the out of stock products visible on your website? Thanks for the reply Marius, it's working now.
2025-04-01T04:35:47.959417
2023-11-02T16:09:54
1974589328
{ "authors": [ "RobMeades", "benhaub" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11730", "repo": "u-blox/ubxlib", "url": "https://github.com/u-blox/ubxlib/issues/156" }
gharchive/issue
Corrupted payload with uSockWrite Environment Module type: SARA-U201 Device type: Cell Network type: Cell Enable power pin: N/C (tied to 3v3) Power-on pin: N/C tied high VInt pin: N/C DTR power saving pin: N/C Transport type: UART Baud rate: 115200 CTS/RTS: N/C, pulled to ground with 2kΩ resistors Platform: esp-idf Firmware: esp-idf v5.1.1 ubxlib: bd8a69efdecc6655fd2f6fcaebc9d63dac1f7571 Example: main_tls.c with no TLS, no credential checking. Issue When opening a TCP cellular socket, the payload data is corrupted by the AT+USOWR command (i.e. the beginning of my payload is corrupted and an AT+USOWR is appended and sent to my http server which corrupts my request). I have a mod on my board which pulls CTS and RTS down to ground with 2kΩ resistors since they are not connected and serial communication over UART was not possible with those pins floating. See similar issue here: https://portal.u-blox.com/s/question/0D52p00009JhO5VCAV/sara-r412m-sending-at-command-on-socket Workaround To fix this issue I had to disable UART power saving for modules that don't use CTS/RTS and use Tx data to wakeup by commenting out the following: //else clause in u_cell_pwr.c:987 else { success = moduleConfigureOne(atHandle, "AT&K0", U_CELL_PWR_CONFIGURATION_COMMAND_TRIES); // RTS/CTS handshaking is not used by the UART HW, we // can use the wake-up on TX line feature without any // complications if (uAtClientWakeUpHandlerIsSet(atHandle) && U_CELL_PRIVATE_HAS(pInstance->pModule, U_CELL_PRIVATE_FEATURE_UART_POWER_SAVING)) { uartPowerSavingMode = U_CELL_PWR_PSV_MODE_DATA; } (i.e. do not send AT+UPSV=1,1300, or likely any other derivative of that command). Hi, and thanks very much for the detailed post. Let me replay what you've said to see if I've understood it correctly. UART power saving is on (it will be by default) and is set up to wake the module on TX line activity with a 6-second (1300 GSM frames) timeout, also the default. You have disabled CTS and RTS and tied both to ground. Under these circumstances, data corruption occurs when you use the sockets interface. An aspect of this corruption is that a bit of the next AT command ends up appended to the data you are sending. Is that a correct interpretation? First comment is that, it is probably not well documented but, you might be able to just define U_CFG_CELL_DISABLE_UART_POWER_SAVING for your build, to avoid needing to comment out any code. It seems likely that what is happening is that, somehow, the module is not properly awake after it has sent the @ character that prompts the ubxlib code for the data. Since the AT command that prompted the @ character said "I'm gonna send you this many characters", if it somehow misses characters at the start it will just pick them up from the thing that follows, which happens to be the next AT command, which will of course mess that one up also, all not good. First question, I guess, is whether power saving is important to you. If not you could hopefully just get on with life by either employing your workaround or defining U_CFG_CELL_DISABLE_UART_POWER_SAVING for your build. If power saving is important to you, do you happen to have a Saleae probe or similar and could capture the problem occurring on the UART interface with that? If we can see the exact timing we can likely figure out which bit is not awake when and mitigate this somehow in the sockets code. Is that a correct interpretation? Everything is correct except for the 4th point. 
The behaviour I see is that it's actually the previous AT command being sent along with my payload. I see AT+USOWR=sock,bytesToWrite, not +USOWR: sock,bytesWritten. The difference there being is that the previous command is on the Tx line along with my payload, while the next command is on the Rx line. is power saving is important to you Yeah it is important. My hardware team has left the CTS and RTS pins not connected to save on pin count so we can scope it to see what's going on. Thanks for the swift response. From a terminology point of view, I think taken from the ITU V.250 standard, the AT+BLAH thing would usually be referred to as the "command" and the "+BLAH:" thing that follows the "information response". To clarify then, if you were sending the payload "The quick brown fox jumps over the lazy dog": AT+USOWR=0,44 @The quick brown fox jumps over the lazy dog. +USOWR: 0,44 OK ...what ends up at the far end is: <some portion of AT+USOWR=0,44> Is there any chance that the bit at the end, the "some portion of AT+USOWR=sock,bytesToWrite, is from the next write to the socket? i.e. if sending in a sequence, might it be: AT+USOWR=0,44 @The quick brown fox jumps over the lazy dog. +USOWR: 0,44 OK AT+USOWR=... // This bit appended to the end of the intended message? Maybe paste in here a segment of AT log, to make it clear what's going on? defining U_CFG_CELL_DISABLE_UART_POWER_SAVING for your build. Tried this but the power saving configuration is set based upon enabled pins within moduleConfigure, which does not call uCellPwrEnableUartSleep, at least in the cell API. [corruption of some form][some portion of the intended payload][some portion of AT+USOWR=0,44] That is completely correct. Exactly. Is there any chance that the bit at the end, the "some portion of AT+USOWR=sock,bytesToWrite, is from the next write to the socket? No. I send a single packet with a payload of 137 bytes.. Here is some screenshots. I can see some slightly different behaviour when I change my message from an http request to "The quick brown fox jumps over the lazy dog" in that now the beginning of the payload is corrupted and replaced with the AT command where as before the payload was corrupted and appended with an AT command. ubxlib AT log: Wireshark: My server: Very interesting: so the payload is 44 bytes long (including the null terminator): "The quick brown fox jumps over the lazy dog[00]" Assuming that what you have highlighted in the Wireshark log is the payload, then there are: 7 bytes of corruption: Y[8a][d5][a5][8d][ad][81], the first 10 bytes of the intended payload ("The quick ") are missing, the remaining 34 bytes of the intended payload ("brown ... dog[00]") are received correctly, 3 bytes, [0d]AT, are added at the end, where I think the linefeed character, 0x0d, results in your print making it look like they are at the front. this makes a total of 7 + 34 + 3 = 44 bytes of payload; nice that something adds up. Of course the characters [0d]AT are visible in the ubxlib log. The only reason they would be sent by ubxlib, and be included in the payload, is if the module was sitting there waiting for more characters: at the same time ubxlib is waiting for the module to return OK and it hasn't so ubxlib is prodding it to make it do something. This is consistent with the module having missed characters from the start of the sequence: it wants 44 of them and hasn't got 44 of them. 
If [0d]AT were enough to make the module transmit the payload we can guess that it had only received 41 of the 44 bytes that it was sent. I've no idea why that might be but, purely as an experiment, would you be able to try changing the value here: https://github.com/u-blox/ubxlib/blob/cd01364006c3bcf1476b7711ae43b5fbf7b4a79b/cell/src/u_cell_sock.c#L1591 ....to be, say, 100, to see if that has any effect on the world? To be clear, this is not a solution, just an investigation. Seems like it made it better, but not quite fixing it. 100ms: 200ms 400ms 1200ms On the U_CFG_CELL_DISABLE_UART_POWER_SAVING, checking if I've misunderstood: the code here, which would be omitted if U_CFG_CELL_DISABLE_UART_POWER_SAVING were defined, is the code that sets uCellPrivateWakeUpCallback() as a wake-up handler inside the AT Client; so if U_CFG_CELL_DISABLE_UART_POWER_SAVING is defined for the build, no wake-up handler should be set inside the AT Client, this handler, inside the AT Client, is the thing that sends that AT we talked about above if there has been no AT command for, in this case, for 6 seconds: this is required, when UART power saving, is active to wake the module up. the code over in moduleConfigure() checks whether such a wake-up handler has been set in the AT Client by calling uAtClientWakeUpHandlerIsSet(); it does this in both halves of the if() check concerning whether the flow control lines are on or not, and in most of the other if() checks that follow; if no AT Client wake-up handler has been set this should always return false. Basically, the intention is that if that handler is not set no power saving of any form should occur. Can you expand a little on why this does not have the desired effect for you? Seems like it made it better, but not quite fixing it. Thanks for doing that and, yes, it is utterly mad. Even assuming that UART power saving were enabled, the wake-up from 32 kHz sleep inside the module takes a few milliseconds, not half a second. Could you possibly paste into here a log of all the rubbish that ubxlib spits out, from when it starts up, in case anything is going wrong with any of the configuration steps? U_CELL: initialising with enable power pin not connected, PWR_ON pin not connected and VInt pin not connected. [00]AT AT OK U_CELL_PWR: powering on, module is already on. ATE0 ATE0 OK AT+CMEE=2 OK AT+UDCONF=1,0 +UMWI: 0,1 +UMWI: 0,2 +UMWI: 0,3 +UMWI: 0,4 OK ATI9 23.60,A01.01 OK AT&C1 OK AT&D1 OK AT&K0 OK AT+UPSV=1,1300 OK AT+UGPRF? AT AT OK AT+CFUN=0 OK Opened device with return code 0. Bringing up the network... U_CELL_NET: preparing to register/connect... AT+CREG=2 OK AT+CGREG=2 OK AT+CIMI 234500023270006 OK U_CELL_NET: user-specified APN is "jtm2m". U_CELL_NET: setting automatic network selection mode... AT+COPS? +COPS: 0 OK AT+CFUN=1 OK AT+CREG? +CREG: 2,0 OK 0: NReg AT+CGREG? +CGREG: 2,0 OK 0: NReg AT+CREG? +CREG: 2,0 OK 0: NReg AT+CGREG? +CGREG: 2,0 OK 0: NReg AT+CREG? +CREG: 2,0 OK 0: NReg AT+CGREG? +CGREG: 2,0 OK 0: NReg AT+CREG? +CREG: 2,0 OK 0: NReg AT+CGREG? +CGREG: 2,0 OK 0: NReg AT+CREG? +CREG: 2,0 OK 0: NReg AT+CGREG? +CGREG: 2,0 OK 0: NReg AT+CREG? +CREG: 2,0 OK 0: NReg AT+CGREG? +CGREG: 2,0 OK 0: NReg AT+CREG? +CREG: 2,0 OK 0: NReg AT+CGREG? +CGREG: 2,0 OK 0: NReg AT+CREG? +CREG: 2,0 OK 0: NReg AT+CGREG? +CGREG: 2,0 OK 0: NReg AT+CREG? +CREG: 2,0 OK 0: NReg AT+CGREG? +CGREG: 2,0 OK 0: NReg AT+CREG? +CREG: 2,0 OK 0: NReg AT+CGREG? +CGREG: 2,0 OK 0: NReg AT+CREG? 
+CREG: 2,0 OK 0: NReg +CREG: 5,"EA6D","544C28",2 +CGREG: 5,"EA6D","544C28",2,"01" 2: RegR 2: RegR AT+COPS=3,0 OK AT+COPS? +COPS: 0,0,"Rogers Wireless",2 OK AT+CGATT? +CGATT: 1 OK AT+UPSD=0,1,"jtm2m" OK AT+UPSD=0,2,"" OK AT+UPSD=0,3,"" OK AT+UPSD=0,7,"<IP_ADDRESS>" OK AT+UPSD=0,6,3 OK AT+UPSDA=0,3 OK U_CELL_NET: connected after 12 second(s). Looking up server address... Address is: IPV4 <IP_ADDRESS> Creating socket... AT+USOCR=6 +USOCR: 0 OK U_SOCK: socket created, descriptor 0, network handle 0x3fcc0b18, socket handle 0. U_SOCK: connecting socket to "<IP_ADDRESS>:8080"... AT+USOCO=0,"<IP_ADDRESS>",8080 OK U_SOCK: socket with descriptor 0, network handle 0x3fcc0b18, socket handle 0, is connected to address "<IP_ADDRESS>:8080". Sending data... AT+USOWR=0,44 @The quick brown fox jumps over the lazy dog[00]AT AT +USOWR: 0,44 OK AT+USOER +USOER: 0 OK Sent 44 byte(s) to echo server. Done. That looks quite sensible, UART power saving is enabled if nothing happens for 6 seconds. What we appear to be seeing, however, is only the periodic wake-up on paging and NOT the wake-up on a character arriving at the module. FYI, integration manual: https://content.u-blox.com/sites/default/files/SARA-G3-U2_SysIntegrManual_UBX-13000995.pdf ...page 58, "Wake-up via data reception", is the detailed reference. It says: On SARA-U2 series, the TXD input line is configured to wake up the system via data reception if AT+UPSV=1 is set with hardware flow control disabled We have that, AT&K0 being the bit that disables the flow control. I will continue thinking... If you get a moment it might be worth adding a call to uAtClientTimestampSet(0), just after you have brought the cellular device up, and paste here another log of the bit around the socket send. This will put timestamps into the AT prints, which might be useful. U_CELL_NET: connected after 11 second(s). Looking up server address... Address is: IPV4 <IP_ADDRESS> Creating socket... 0000/01/01 00:00:00.049: AT+USOCR=6 0000/01/01 00:00:00.089: 0000/01/01 00:00:00.089: +USOCR: 0 0000/01/01 00:00:00.090: 0000/01/01 00:00:00.090: OK U_SOCK: socket created, descriptor 0, network handle 0x3fcc0b18, socket handle 0. U_SOCK: connecting socket to "<IP_ADDRESS>:8080"... 0000/01/01 00:00:00.149: AT+USOCO=0,"<IP_ADDRESS>",8080 0000/01/01 00:00:00.529: 0000/01/01 00:00:00.529: OK U_SOCK: socket with descriptor 0, network handle 0x3fcc0b18, socket handle 0, is connected to address "<IP_ADDRESS>:8080". Sending data... 0000/01/01 00:00:00.589: AT+USOWR=0,44 0000/01/01 00:00:00.639: 0000/01/01 00:00:00.639: @0000/01/01 00:00:00.689: The quick brown fox jumps over the lazy dog[00]AT 0000/01/01 00:00:08.759: AT 0000/01/01 00:00:08.919: 0000/01/01 00:00:08.919: +USOWR: 0,44 0000/01/01 00:00:08.920: 0000/01/01 00:00:08.920: OK 0000/01/01 00:00:08.920: AT+USOER 0000/01/01 00:00:08.959: 0000/01/01 00:00:08.959: +USOER: 0 0000/01/01 00:00:08.959: 0000/01/01 00:00:08.960: OK Sent 44 byte(s) to echo server. Done. Just weird. The module must have been awake at 0.589 seconds, because it reacted to the AT+USOWR=0,44 and sent the @ prompt that arrived 50 milliseconds later and that we reacted to by sending the payload 50 milliseconds after that. Losing the first 7 characters of the payload to garbage makes no sense, I can't see how it can be down to power saving. Yet your experiments show that setting UPSV=0 stops the problem occurring. Hmph. 
Yeah just, for your sanity, here is a working example where this time the power saving command was AT+UPSV=0 since U_CFG_CELL_DISABLE_UART_POWER_SAVING was defined. Wireshark: My server: Ubxlib AT log I'm sane! Well, at least, thank you for the straw to clutch at. Having a working thing is a good start. FYI, the main support guys for SARA-U201 are in Italy which is currently experiencing one of those European national holidays that are on a Thursday and so no-one will be in now until next week. If nothing has occurred to me by then I know who to contact though. OK: do you want the bad news or the bad news [but, don't worry, there is a way forward at the end :-)]? First thing, in case you weren't aware, is that, due to lack of availability of a critical component, SARA-U201 was "end of lifed" a little while ago: https://www.u-blox.com/sites/default/files/SARA-U201_EOL_UBX-22002617.pdf In fact we were planning on beginning deprecation of it in ubxlib from our next release (1.4, due start 2024), eventually dropping support for it likely mid next year, though if people are still using it we will not do that. The reason I mention this is that a thorough search of our internal JIRA database reveals that there is a bug in SARA-U201 which means that, when UART power saving is enabled, it immediately goes into 32 kHz sleep, if it can, after it has sent the @ prompt. And because the product is end of life this was not fixed. Crazy, I know, but seems to be the case. There are two workarounds for this: call uCellSockHexModeOn() to use hex mode instead of binary mode; in this case there is no @ prompt, so no @ prompt problem, but it will halve the amount of data that can go into each call to uCellSockWrite(); doesn't halve the throughput or anything, just the number of calls you might need to make if the amount of data is > 512ish bytes. hack something like this into both uCellSockWrite() and probably also uCellSockSendTo() (for the UDP case): uAtClientLock(atHandle); // Existing AT Client lock function // Disable UART power saving while doing the @ process uAtClientCommandStart(atHandle, "AT+UPSV=0"); uAtClientCommandStopReadResponse(atHandle); uAtClientCommandStart(atHandle, "AT+USxxx="); // Existing USOWR/USOST command ... // Send etc. is performed // At the end, re-enable UART power saving uAtClientCommandStart(atHandle, "AT+UPSV=1,1300"); uAtClientCommandStopReadResponse(atHandle); uAtClientUnloock(atHandle); // Existing AT Client unlock function Taking the second approach will slow things down somewhat, an additional pair of AT exchanges per call to uCellSockWrite()/uCellSockSendTo(), doing the first doesn't have any significant drawbacks if you're only using TCP; for UDP it has an impact since the maximum UDP packet size is reduced, but since TCP is a stream you will be unlikely to notice anything. An additional question: what is your "usage scenario"? Is it one where the device will sit in idle, transmitting, say, every few minutes/10s of minutes, maybe waiting for stuff to arrive on the downlink of an established TCP socket, or is it one where the device will power up, register, do something, and then switch off again to save power? Fundamentally, what is the periodicity of data transmission and how much dependency is there on downlink stuff? I ask because, if your use-case falls more into the second camp then UART power saving isn't going to save you very much anyway, you will burn the vast majority of your power waking up and registering; UART power saving is only helpful in the first camp. 
It was some planned testing I have to do. First thing, in case you weren't aware, is that, due to lack of availability of a critical component, SARA-U201 was "end of lifed" a little while ago: I did not know that but I think we'll be using them for a little while longer considering what we have in inventory. There are two workarounds for this Thanks. Either one of those should work. We have quite a bit of unused cycles on our s3. what is your "usage scenario"? Right now we send data at 1Hz, but I had planned to see what the power savings would be if that were decreased to 0.5Hz or 0.2Hz. Right now we send data at 1Hz, but I had planned to see what the power savings would be if that were decreased to 0.5Hz or 0.2Hz. You will be remaining registered for these kinds of rates, not powering off, so agreed that AT+UPSV is the thing. Either one of those should work. We have quite a bit of unused cycles on our s3. I'd go for the "hex mode" approach, there should be no down-sides, you can still use AT+UPSV and you don't have to make any hacky code modifications. we'll be using them for a little while longer considering what we have in inventory. In that case we won't deprecate SARA-U201 just yet. Note, though, that we no longer actively test ubxlib with a SARA-U201: this is because the UK networks that we are using, for 2G, now implement the awful/stupid GSMA radio limitations which mean that anything that powers on and off again frequently, which we inevitably do during testing, gets banned from the network for an hour, making meaningful test operation impossible. All should still be fine (we test everything on Cat-M1/LTE RAT), please let us know if you come up against any problems and we will address them. Thanks Rob, will go with hex mode. Good stuff, gonna close this one, please re-open or open a new one as necessary.
2025-04-01T04:35:48.152975
2024-01-22T16:52:39
2094337059
{ "authors": [ "uNetworkingAB" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11731", "repo": "uNetworking/uWebSockets", "url": "https://github.com/uNetworking/uWebSockets/issues/1701" }
gharchive/issue
Turn LocalCluster into an App type rather than wrapper The interface of LocalCluster is simple, but it can be even simpler: it can simply become an App type that applies every registered handler to all underlying apps. Then you can change from SSLApp to ClusteredApp or ClusteredSSLApp in whatever way you want, changing no code. Hmmmm... sounds good, possibly too simplified... you probably want to get your ID along with the App so that you can register handlers that know which isolated part they correspond to, and that interface is not the same
2025-04-01T04:35:48.156471
2022-11-30T18:42:56
1470070990
{ "authors": [ "BobTorgerson", "brucecrevensten" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11732", "repo": "ua-snap/arctic-eds", "url": "https://github.com/ua-snap/arctic-eds/issues/237" }
gharchive/issue
Add Elevation, Geology, Ecoregion to introduction as inline text Add Elevation, Geology, Ecoregion to introduction as inline text (with links out to the best online resource for Ecoregion classifications + the USGS Geology map). Read the NCR report intro for motivation for how to do this. Complete as part of #236 Complete! Will need more revision, and in particular we may want to consider removing geology/ecoregion altogether (unsure) and also perhaps making elevation cover a 20km pixel instead of 1km.
2025-04-01T04:35:48.194090
2020-03-25T06:02:09
587463320
{ "authors": [ "coveralls", "yux0" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11733", "repo": "uber/cadence", "url": "https://github.com/uber/cadence/pull/3137" }
gharchive/pull-request
Add failover start time and timeout in domain data What changed? Add failover config in domain data Why? The failover config is for graceful domain failover. The config How did you test it? Database tests Potential risks Fail to update DV schema prior to new deployment will cause service startup failure Coverage decreased (-0.4%) to 67.145% when pulling 75c473133233ea4eca5e22743c81a75686c0f2a8 on domain_failover_config into 31d2619460f59877a2f080b0415c1290a795916f on master.
2025-04-01T04:35:48.201181
2019-11-27T12:57:56
529330449
{ "authors": [ "Pessimistress", "wlfei0502" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11734", "repo": "uber/deck.gl", "url": "https://github.com/uber/deck.gl/issues/3936" }
gharchive/issue
About serration of the ScatterPointLayer Target Use case More smoother border of the point is better。 Proposed feature I compare Mapbox.gl to Deck.gl on drawing point. The Mapbox.gl draws more smoother than the Deckgl. Here is the result: Mapbox.gl: Deck.gl: Code deck.gl const deckgl = new deck.DeckGL({ container: 'container', mapboxApiAccessToken: 'pk.eyJ1Ijoid2xmZWkiLCJhIjoiY2puMTB6MXZlNHZjcTNwbnl3dnowYjhoaSJ9.s6ZkjRHGIY6xVNBRAf52MQ', initialViewState: { latitude: 51.47, longitude: 0.45, zoom: 4, bearing: 0, pitch: 0, }, controller: true, layers: [ new deck.GeoJsonLayer({ id: 'airports', data: AIR_PORTS, // Styles filled: true, pointRadiusMinPixels: 20, pointRadiusScale: 20, getRadius: f => (11 - f.properties.scalerank), getFillColor: [255, 140, 0, 255], lineWidthMinPixels:2, lineWidthMaxPixels:2, getLineColor: [200, 0, 80, 255], // Interactive props pickable: true, autoHighlight: true, }) ], }); mapbox.gl const map = deckgl.getMapboxMap(); map.on('load', function () { // The data option must be replaced by the content of the geojson. Here is only tip. map.addSource('point', {type:'geojson', data:'https://d2ad6b4ur7yvpq.cloudfront.net/naturalearth-3.3.0/ne_10m_airports.geojson'}); map.addLayer({ "id": "point", "source": "point", "type": "circle", "paint": { "circle-radius": 20, "circle-color": "rgb(255, 140, 0)", "circle-stroke-width":2, "circle-stroke-color":"rgb(200, 0, 80)", } }); }); What is your device pixel ratio? Try set useDevicePixels: 2 on Deck. My device pixel ratio is 1.5 Can the deck.gl not recognize device pixel ratio by itself deck.gl uses window.devicePixelRatio by default. You can try manually increasing it and see if it reduces the artifact. Can the deck.gl not recognize device pixel ratio by itself deck.gl uses window.devicePixelRatio by default. You can try manually increasing it and see if it reduces the artifact. Thanks a lot. Set useDevicePixels: 2 is very effective.
2025-04-01T04:35:48.204867
2019-01-13T19:28:05
398690386
{ "authors": [ "heshan0131", "macrigiuseppe" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11735", "repo": "uber/kepler.gl", "url": "https://github.com/uber/kepler.gl/issues/326" }
gharchive/issue
Better image export Describe the bug The current export functionality requires to load css and it's trigger a CORS request and it fails. To Reproduce Steps to reproduce the behavior: Create a map Try to export the map Expected behavior Export image should work properly Multiple tasks were filed against this issue: #212 #155 #150 Possible Implementation Use html-screen-capture-js to generate image (rulesToAddToDocStyle for ratio) https://github.com/uber/kepler.gl/pull/360
2025-04-01T04:35:48.207767
2024-11-28T02:07:48
2700455183
{ "authors": [ "sidepelican" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11736", "repo": "uber/mockolo", "url": "https://github.com/uber/mockolo/pull/277" }
gharchive/pull-request
Bump up version to 2.2.0 prepare new version. I see https://github.com/uber/mockolo/issues/261#issuecomment-2503521093, and concurrency support may take some time, so I thought it would be a good idea to do a release once.
2025-04-01T04:35:48.208655
2016-06-18T01:14:38
160999733
{ "authors": [ "LINKIWI", "harlantwood" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11737", "repo": "uber/npm-shrinkwrap", "url": "https://github.com/uber/npm-shrinkwrap/pull/111" }
gharchive/pull-request
Note: npm >= 3 is not supported. See also #83 @lxe / @Raynos Can we merge this? This is a source of confusion with this package; a note in the README would be helpful
2025-04-01T04:35:48.257446
2021-05-07T21:39:30
879791924
{ "authors": [ "ketkarameya", "lazaroclapp" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11738", "repo": "uber/piranha", "url": "https://github.com/uber/piranha/pull/128" }
gharchive/pull-request
[WIP] [PiranhaJava] Support fixing enum fields with trailing semicolon (There has to be a better way of doing this, just making a PR so we can track this and a possible solution) @lazaroclapp I think this recent PR solves a similar problem.
2025-04-01T04:35:48.264149
2015-02-10T19:28:03
57223204
{ "authors": [ "iproctor", "jmccarthy14" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11739", "repo": "uber/sevnup", "url": "https://github.com/uber/sevnup/pull/9" }
gharchive/pull-request
alternative to filter @iproctor definitely more confusing in the code, but the consumers life is more consistent as you mentioned yup
2025-04-01T04:35:48.266195
2019-06-25T03:16:00
460187305
{ "authors": [ "LinLidi", "jon-tri" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11740", "repo": "uber/xviz", "url": "https://github.com/uber/xviz/issues/486" }
gharchive/issue
How can I convert my dataset into xviz protocol? Hi, I have just started a streetscape.gl example with xviz stream data, which is the kitti dataset. If I want to convert my dataset into the xviz protocol, maybe as json files, are there any guides? Thanks. Best wishes. Have you already looked at this: https://github.com/uber/xviz-data ? @jon-tri I've got it. Thanks so much.
2025-04-01T04:35:48.276183
2023-08-11T01:06:47
1846068171
{ "authors": [ "rndquu", "web4er" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11741", "repo": "ubiquity/ubiquibot", "url": "https://github.com/ubiquity/ubiquibot/pull/607" }
gharchive/pull-request
Update readme, add instructions for payment permit locally This PR doesn't resolve an existing issue. This PR does the following: update outdated instructions in the readme file add a manual method to setup payment permits and private keys related to it for a local instance of ubiquibot Why didn't I add an automatic method? It appears the automatic method https://pay.ubq.fi/onboarding is for clients who are using an official instance of ubiquibot. Because it doesn't provide x25519_PRIVATE_KEY for the encrypted private key which is needed in a local instance. QA-part1: https://github.com/web4er/ubiquibot/issues/79 QA-part2: https://github.com/web4er/ubiquibot/issues/78 It appears the automatic method https://pay.ubq.fi/onboarding is for clients who are using an official instance of ubiquibot. Because it doesn't provide x25519_PRIVATE_KEY for the encrypted private key which is needed in a local instance. Exactly I have made the required changes. This PR is ready for a new review. @pavlovcik @whilefoo Need to approve this if you think your concerns have been addressed. @whilefoo @pavlovcik @rndquu please approve this PR so that it can be merged.
2025-04-01T04:35:48.282804
2023-09-20T09:18:02
1904559918
{ "authors": [ "Keyrxng", "pavlovcik", "rndquu" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11742", "repo": "ubiquity/ubiquity-dollar", "url": "https://github.com/ubiquity/ubiquity-dollar/issues/791" }
gharchive/issue
Rivet "React DevTools for crypto" from Paradigm https://twitter.com/keyrxng/status/1696579627756449864 @Keyrxng do you think it makes sense for us to add Rivet instead? I was going to suggest that when I saw it, but it will require viem and wagmi upgrades for sure if trying to embed it somehow; otherwise have it included in the dev flow as a standalone tool to use. That's why I didn't think it was worth re-suggesting, due to the previous talk on viem etc. But as a standalone it would benefit devs far more than what I was able to pipe into the UI, as I didn't know of EIP-6963 until after I saw Rivet, if I'm being honest. https://www.paradigm.xyz/oss/viem https://www.paradigm.xyz/2022/11/paradigm-and-wagmi https://github.com/paradigmxyz/rivet Rivet is a browser extension that calls the same anvil "cheat" RPC methods as already implemented in this issue. It doesn't require any special integration on our side, so Rivet can be used as a standalone dev tool. And it just has a shitton of data visualization, which there isn't much of other than the anvil console in what I pushed, but Rivet I believe trumps that. Talked to the creator of TypeChain (he came over to my house during Korea Blockchain Week lol) and he said that it is now deprecated in favor of viem. @rndquu something to consider if our current tech stack becomes an issue.
2025-04-01T04:35:48.283891
2022-09-02T15:01:52
1360313550
{ "authors": [ "apoufortin", "louis-z" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11743", "repo": "ubisoft/NGitLab", "url": "https://github.com/ubisoft/NGitLab/pull/278" }
gharchive/pull-request
Adding support for group milestones and group issues Adding support for group milestones in the MilestoneClient. Can you add 1 or more tests to ./NGitLab.Tests/Milestone/MilestoneClientTests.cs, to validate creation and retrieval of group milestones?
2025-04-01T04:35:48.287676
2023-10-16T03:32:45
1944283982
{ "authors": [ "reshnashrestha", "s4il3sh" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11744", "repo": "ubsuny/23-Homework4G1", "url": "https://github.com/ubsuny/23-Homework4G1/issues/43" }
gharchive/issue
could not find linting Hope you guys are working on linting for your unittest file for github action. Yes @s4il3sh we will be working on it. Thank you for noticing it. Ok. Great! We have worked on it. Can you check @s4il3sh
2025-04-01T04:35:48.288532
2023-11-26T05:25:02
2010898035
{ "authors": [ "AhmedCode99", "reshnashrestha" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11745", "repo": "ubsuny/23-Homework7G2", "url": "https://github.com/ubsuny/23-Homework7G2/issues/9" }
gharchive/issue
are we supposed to keep the unittesting.yml file? As described in the hw task, no unit tests are necessary, so what's the point of unittesting.yml? I think it is there by mistake. You can have the maintainer of the repo delete it
2025-04-01T04:35:48.304182
2024-03-23T09:28:28
2203791124
{ "authors": [ "linghengqian" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11746", "repo": "ubuntu/WSL", "url": "https://github.com/ubuntu/WSL/issues/449" }
gharchive/issue
The jetbrains-toolbox.desktop file of jetbrains-toolbox under Ubuntu WSL cannot be used cmd /c ver Microsoft Windows [Version 10.0.22631.3296] wsl --status Default distribution: Ubuntu-22.04 Default version: 2 Did the problem occur during installation? [ ] Yes What happened? The jetbrains-toolbox.desktop file of jetbrains-toolbox installed through ./jetbrains-toolbox --install under Ubuntu WSL cannot be used. To reproduce this issue, I need to install an Ubuntu 22.04.3 instance on Windows 11 via https://www.microsoft.com/store/productId/9PN20MSR04DW . Execute the following command in Ubuntu Shell to install jetbrains-toolbox. I neglected to adjust WSL to the Simplified Chinese environment through sudo dpkg-reconfigure locales and sudo apt install language-pack-gnome-zh-hans-base -y here, because it makes no sense. sudo apt update && sudo apt upgrade -y sudo apt install gnome-text-editor gimp vlc nautilus x11-apps -y cd /tmp wget https://packages.microsoft.com/repos/edge/pool/main/m/microsoft-edge-stable/microsoft-edge-stable_122.0.2365.92-1_amd64.deb?brand=M102 -O ./microsoft-edge-stable.deb sudo apt install --fix-missing ./microsoft-edge-stable.deb -y sudo apt install libfuse2 libxi6 libxrender1 libxtst6 mesa-utils libfontconfig libgtk-3-bin tar -y wget https://download-cdn.jetbrains.com/toolbox/jetbrains-toolbox-<IP_ADDRESS>90.tar.gz -O ./jetbrains-toolbox.tar.gz tar -xzf ./jetbrains-toolbox.tar.gz ./jetbrains-toolbox-<IP_ADDRESS>90/jetbrains-toolbox --install rm -rf /tmp/jetbrains-toolbox-<IP_ADDRESS>90/ ~/.local/share/applications/jetbrains-toolbox.desktop The ./jetbrains-toolbox --install command comes from https://youtrack.jetbrains.com/issue/TBX-2314/Snap-package#focus=Comments-27-2932683.0-0 . The steps to execute ~/.local/share/applications/jetbrains-toolbox.desktop occur after closing the jetbrains-toolbox interface. An Error Log will be thrown at this time. $ ~/.local/share/applications/jetbrains-toolbox.desktop /home/linghengqian/.local/share/applications/jetbrains-toolbox.desktop: line 1: [Desktop: command not found /home/linghengqian/.local/share/applications/jetbrains-toolbox.desktop: line 3: fg: no job control /home/linghengqian/.local/share/applications/jetbrains-toolbox.desktop: line 7: Toolbox: command not found /home/linghengqian/.local/share/applications/jetbrains-toolbox.desktop: line 11: X-GNOME-Autostart-enabled=true: command not found /home/linghengqian/.local/share/applications/jetbrains-toolbox.desktop: line 13: X-GNOME-Autostart-Delay=10: command not found /home/linghengqian/.local/share/applications/jetbrains-toolbox.desktop: line 14: X-MATE-Autostart-Delay=10: command not found /home/linghengqian/.local/share/applications/jetbrains-toolbox.desktop: line 15: X-KDE-autostart-after=panel: command not found In this case, I can only start jetbrains-toolbox via ~/.local/share/JetBrains/Toolbox/bin/jetbrains-toolbox. What was expected? Executing ~/.local/share/applications/jetbrains-toolbox.desktop should open jetbrains-toolbox normally. Steps to reproduce More specific steps are above. Or https://youtrack.jetbrains.com/issue/TBX-11704 .
sudo apt update && sudo apt upgrade -y sudo apt install gnome-text-editor gimp vlc nautilus x11-apps -y cd /tmp wget https://packages.microsoft.com/repos/edge/pool/main/m/microsoft-edge-stable/microsoft-edge-stable_122.0.2365.92-1_amd64.deb?brand=M102 -O ./microsoft-edge-stable.deb sudo apt install --fix-missing ./microsoft-edge-stable.deb -y sudo apt install libfuse2 libxi6 libxrender1 libxtst6 mesa-utils libfontconfig libgtk-3-bin tar -y wget https://download-cdn.jetbrains.com/toolbox/jetbrains-toolbox-<IP_ADDRESS>90.tar.gz -O ./jetbrains-toolbox.tar.gz tar -xzf ./jetbrains-toolbox.tar.gz ./jetbrains-toolbox-<IP_ADDRESS>90/jetbrains-toolbox --install rm -rf /tmp/jetbrains-toolbox-<IP_ADDRESS>90/ ~/.local/share/applications/jetbrains-toolbox.desktop Additional information Null. Already posted at https://youtrack.jetbrains.com/issue/TBX-11704/The-jetbrains-toolbox.desktop-file-of-jetbrains-toolbox-installed-through-.-jetbrains-toolbox-install-under-Ubuntu- WSL-cannot-be#focus=Comments-27-9581112.0-0 After receiving the reply, this issue can be closed.
2025-04-01T04:35:48.352512
2017-04-26T22:32:03
224618609
{ "authors": [ "ducky64", "jackkoenig" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11747", "repo": "ucb-bar/chisel3", "url": "https://github.com/ucb-bar/chisel3/pull/595" }
gharchive/pull-request
Add implicit CompileOptions to Record and Bundle. Fixes #495 Helps distinguish between Records/Bundles defined in Chisel._ vs. chisel3._. Also override compilationOptions when bulk connecting Records/Bundles defined in Chisel._. This allows Records/Bundles defined in Chisel._ code to be correctly bulk connected in chisel3._ code. Which implicit compile option is used when a Record method that takes a compile option is called? @ducky64 approved this pull request. lgtm According to https://stackoverflow.com/questions/5598085/where-does-scala-look-for-implicits, it first looks in the current scope. I think if there are multiple options in any of those priorities, the compiler will error out with an ambiguous implicit. I guess it's helpful that Record methods don't have implicit arguments, though superclass methods do. It seems that this isn't exactly doing what I wanted, so I'm taking another look. As discussed last week, not going to let the perfect be the enemy of the good. I'm going to merge this when updating passes Jenkins. This currently issues a Firrtl bulk connect whenever a Chisel._ Record/Bundle is used in chisel3._ code. The better proposed solution is to still do the standard chisel3._ bulk connect walk through leaf elements, but be a bit more permissive for Chisel._ bundles (eg. no explicit direction is Output). There appears to be a compiler issue with rocket-chip, debugging. False alarm, we good
2025-04-01T04:35:48.372628
2022-08-07T18:37:05
1331090765
{ "authors": [ "paulstatezny" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11748", "repo": "ucbi/uniform", "url": "https://github.com/ucbi/uniform/pull/31" }
gharchive/pull-request
Guide for auto-committing to ejected repos Closes #24 Closes #27 @capitalist – heads up about this commit https://github.com/ucbi/uniform/pull/31/commits/da091ef0eed7562897f196232bd5105c919938bd which changes the public API per #27 Just wanted to make sure you're on board before I merge.
2025-04-01T04:35:48.425275
2022-08-30T18:36:07
1356154985
{ "authors": [ "ajnelson-nist", "kchason" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11749", "repo": "ucoProject/UCO", "url": "https://github.com/ucoProject/UCO/pull/478" }
gharchive/pull-request
MyPy Fixes Addresses errors so that mypy can run against the tests/ directory. Note: the only remaining errors preventing mypy --strict from being run pertain to pytest annotations. Changed argument type definition Added return types The pytest annotation bit may be related to this Issue. Thanks, @kchason , for fielding this issue. For everyone else's background, the case-utils repo has had to ignore a type error I'd accidentally made in the UCO pytest script. I didn't catch it until implementing mypy in case-utils, and left a TODO here (today's state) due to needing to focus on something else at the moment. Because this is a change solely to a Python script, and further to artifacts that don't have a runtime effect (unless symbol references are broken), I am considering this a bugfix proposal. I'm going to wait for CI to pass, and then will merge this.
2025-04-01T04:35:48.449438
2017-03-27T05:56:28
217143816
{ "authors": [ "BernhardRode", "mvirgo" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11750", "repo": "udacity/CarND-Term1-Starter-Kit", "url": "https://github.com/udacity/CarND-Term1-Starter-Kit/pull/70" }
gharchive/pull-request
feature: install gpu accelerated environment for mac & linux with all… … needed deps We need to specify the tensorflow versions to ensure compatibility as it gets upgraded, so we will not merge this pull request.
2025-04-01T04:35:48.470203
2022-03-30T18:27:50
1186831245
{ "authors": [ "diegonc", "udos86" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11751", "repo": "udos86/flutter-fast-forms", "url": "https://github.com/udos86/flutter-fast-forms/issues/16" }
gharchive/issue
What's FastSimpleCustomField? I'm trying to implement a custom field but the class FastSimpleCustomField cannot be imported from anywhere. Is the custom field functionality working? @diegonc There's a chapter on implementing custom form fields in the README.md @diegonc Alright, I see, I'll fix the docs. Thanks!
2025-04-01T04:35:48.471777
2022-09-01T01:03:45
1358103292
{ "authors": [ "kuchienkz", "udos86" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11752", "repo": "udos86/flutter-fast-forms", "url": "https://github.com/udos86/flutter-fast-forms/issues/28" }
gharchive/issue
Missing implementation of [ChoiceChip.iconTheme] and [ChoiceChip.surfaceTintColor] on Flutter version 3.3.0 After updating Flutter to the latest version (v3.3.0 - 8/31/2022), my web project won't compile because of those missing implementations. I hope you can fix this asap, since you just have to literally add, like, 4 lines of code to fix this. fixed in 8.0.0
2025-04-01T04:35:48.494808
2020-11-25T13:58:12
750839665
{ "authors": [ "kosarko" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11753", "repo": "ufal/clarin-dspace", "url": "https://github.com/ufal/clarin-dspace/issues/957" }
gharchive/issue
Fix piwik statistics Items submitted after 2020-05-15 show no views in the piwik-statistics tab (and the item submitted on 2020-05-15 (ud 2.6) shows views only until May 29th) For the lindat repository the fix must happen in https://github.com/ufal/lindat_piwik_reports as we use that for caching the results. I suspect the issue is there also for the "non-cached" version. I believe the underlying issue is an update to piwik >=3.12.0, which "fixes" the behavior of segments (https://github.com/matomo-org/matomo/issues/11900, also see https://forum.matomo.org/t/filter-page-urls-by-segments/34859/8 and especially https://forum.matomo.org/t/filter-page-urls-by-segments/34859/2). 3.12 was released in Nov 2019; not sure when we've updated. I believe this non-cached version should* at least return some numbers, however, these will be meaningless (as it would sum the hits/visits across a segment**, not just filtered to one item page). * it takes a lot of time to produce the result, so it may actually time out. ** i.e. it will be the visits/hits of all the pages a user, who visited the repository item page in question, has visited. Bitstream downloads include bots (this is probably due to #438, the user agent is not sent by dspace). The numbers displayed in dspace might be meaningless anyway; as the piwik config currently has datatable_archiving_maximum_rows_subtable_actions = 100, see https://matomo.org/faq/how-to/faq_54/, in essence this means that if the item is not popular enough in a particular year/month/week/day it will be archived in the 'other' slot and won't be in the api response; this also means you might get different total numbers based on the period (y/m/w/d) you ask for (on the year level the "competition" for the available slots is much higher than on the day level). Maybe https://github.com/ufal/lindat_piwik_reports was meant to solve that, but does it?
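To make the segment discussion concrete, here is a rough sketch of the kind of Reporting API query being described; the host, site id, token and segment value are hypothetical, and it is this style of per-item query whose totals the comment says become misleading (and slow) once the segment only restricts which visits are summed rather than which page rows are returned:

```python
# Hedged sketch of a Matomo/Piwik Reporting API call with a segment.
# The base URL, idSite, token_auth and segment value below are placeholders.
import requests

params = {
    "module": "API",
    "method": "Actions.getPageUrls",
    "idSite": 1,
    "period": "year",
    "date": "2020-01-01",
    "format": "JSON",
    # Segment restricting to visits that hit one item's handle page (illustrative only).
    "segment": "pageUrl=@handle/123456789/42",
    "token_auth": "anonymous",
}
resp = requests.get("https://piwik.example.org/index.php", params=params, timeout=60)
rows = resp.json()  # page-URL rows, still subject to the archiving row limit mentioned above
```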
2025-04-01T04:35:48.497085
2022-01-04T20:03:03
1093710271
{ "authors": [ "ufechner7" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11754", "repo": "ufechner7/KiteModels.jl", "url": "https://github.com/ufechner7/KiteModels.jl/issues/10" }
gharchive/issue
Add 4 point Model Add the 4 point model to the code base. Add a type hierarchy that makes it possible to swap the active model easily. Work in progress: https://github.com/ufechner7/KiteModels.jl/pull/15 Closed by #15
2025-04-01T04:35:48.543296
2023-10-24T22:47:11
1960208583
{ "authors": [ "Phantom0110", "WillBAnders" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11755", "repo": "ufssd/ufssd-website", "url": "https://github.com/ufssd/ufssd-website/issues/18" }
gharchive/issue
Add Officer section to the About page (card component & layout) Add an Officer section to the About page. This section contains a list of officers including their name, position, and picture, which will be controlled by a card component. Requirements Component: Each officer/position will be displayed in a card that should be a separate component (see sketch below). This should contain the position title at the top, then a picture, and then their name. The bottom row may contain any links and/or contact info such as personal website, GitHub, or LinkedIn. Layout: The cards should be displayed in a responsive grid (3 columns on desktop, 1 on mobile). The cards should transition smoothly across different widths. This is how the cards look.
2025-04-01T04:35:48.550459
2018-10-19T16:39:17
372032469
{ "authors": [ "deepakjois", "khaledhosny" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11756", "repo": "ufyTeX/luaharfbuzz", "url": "https://github.com/ufyTeX/luaharfbuzz/pull/40" }
gharchive/pull-request
Some changes I’m experimenting with building luaharfbuzz into LuaTeX, here are some of the changes I made. I believe you have commit rights to this repo. So feel free to push this to master. I am a bit busy nowadays, but I might be able to find some time to do a release to LuaRocks, just to keep things consistent. OK, I wanted another eyeball. I’m also still changing stuff as I go with my experiment, so will keep this open until things settle down and then merge. I will look through the code later this week, and leave comments if I have any. A couple of things: If you are changing the names of public fields (like the Drop redundant… commits), do update the luadoc file: https://github.com/ufyTeX/luaharfbuzz/blob/master/src/harfbuzz.luadoc I haven’t been following Harfbuzz development very closely, but it seems there is a 2.0 version out. Moreover, with the kind of changes you are making it would make sense to bump the major version. So, if/when I release, I would tag the release as luaharfbuzz 2.0. Makes sense? I did update the luadoc file for any name change (I also added aliases to the Lua file to not break any existing code). Version 2.0 sounds good. I’m still working on that though, I keep adding stuff as I continue exploring the integration with LuaTeX, so I’d hold off on merging and releasing for now since things are still evolving. Once the dust settles and my LuaTeX integration is working and more thoroughly tested, I might go more systematic and try to map all of the HarfBuzz API (at least what makes sense for the Lua module) and then version 2.0 would even be more appropriate. Ok sure. I won’t do anything until you commit to master. I think this PR is stable enough and can be merged if you don’t have any objections. I’ll work on another PR that adds more API in a systemic way (right now I was only adding stuff I have immediate need for). Go for it! I will try and find some time to do a release eventually, hopefully by the end of the year.
2025-04-01T04:35:48.556704
2024-05-16T07:07:36
2299535815
{ "authors": [ "mietcls", "nics" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11757", "repo": "ugent-library/biblio-backoffice", "url": "https://github.com/ugent-library/biblio-backoffice/pull/1538" }
gharchive/pull-request
rename mutations;add remove_project mutation resolves #1512 natural mutation names (avoid yodaspeak) add remove_project mutation lock and unlock are replaced by set_locked,true|false for consistency (you win @mietcls) before this is put in production: update gitbook docs update biblio manual When I test with a faulty operation on line 1, the error says it could not process the last line instead of the first one: Error handling for locking does not return the "locked value must be 'true' or 'false'" error Now and then I get this error after trying to process, not sure why: @mietcls replaced by #1539 (also contains all the changes of this pr)
2025-04-01T04:35:48.563921
2023-01-03T13:26:23
1517394326
{ "authors": [ "flovogt", "marianfoo" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11758", "repo": "ui5-community/bestofui5-website", "url": "https://github.com/ui5-community/bestofui5-website/pull/313" }
gharchive/pull-request
feat: enhance favicon support Enhancing the icon/favicon handling, the same way as https://github.com/SAP/openui5-website/pull/154 Thank you for the PR!
2025-04-01T04:35:48.629125
2023-09-06T07:22:14
1883409654
{ "authors": [ "CLAassistant", "ukff" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11759", "repo": "ukff/btp-manager", "url": "https://github.com/ukff/btp-manager/pull/102" }
gharchive/pull-request
wip - temp commit msg Description Changes proposed in this pull request: ... ... ... Related issue(s) Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.Your Name seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you have already a GitHub account, please add the email address used for this commit to your account.You have signed the CLA already but the status is still pending? Let us recheck it.
2025-04-01T04:35:48.632449
2019-02-08T22:58:53
408356503
{ "authors": [ "VibroAxe", "nyemenzo" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11760", "repo": "uklans/cache-domains", "url": "https://github.com/uklans/cache-domains/pull/61" }
gharchive/pull-request
Update cache_domains.json What CDN does this PR relate to Crossfire Capture method Netlimiter Testing Scenario I haven't tested it yet because I don't have the knowledge to manually add a zone, but I noticed it in my limiter, which gives me this remote name. Can you teach me how to manually add a zone to the container? Thank you. PS: I am from the Philippines. I will edit the cdn after running wireshark. Thank you. I will leave it blank first. It seems netlimiter and wireshark give a different host. Excuse my ignorance but what is crossfire? crossfire is an FPS game here in the Philippines. It is well known here and has lots of players. Here is now their official website since they upgraded from the previous one http://forums.gameclub.ph/index.php?board=237.0 Thanks for the input but I'm afraid I'm going to have to reject this due to invalid configuration: crossfire.txt is referenced in the json but not included in the commit. Please feel free to reopen/resubmit with these changes resolved.
2025-04-01T04:35:48.658335
2020-08-12T15:05:15
677767863
{ "authors": [ "dmerejkowsky", "losnappas" ], "license": "Unlicense", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11761", "repo": "ul/kak-lsp", "url": "https://github.com/ul/kak-lsp/issues/384" }
gharchive/issue
kak-lsp + flow not working in Javascript project Steps to reproduce Versions: kak 2020.08.04 kak-lsp 8.0.0 Config: [language.javascript] filetypes = ["javascript"] roots = ["package.json"] command = "yarn" args = ["flow", "lsp"] eval %sh{kak-lsp --kakoune -s $kak_session} set global lsp_cmd "kak-lsp -s %val{session} -vvv --log /tmp/kak-lsp.log" $ git clone https://github.com/tankerhq/sdk-js/ $ kak packages/core/src/Tanker.js # in kak :lsp-enable :lsp-capabilities gd Nothing gets printed, cursor does not move :/ Here are the logs: Aug 12 16:48:27.227 INFO Starting main event loop, module: kak_lsp::session:29 Aug 12 16:48:27.227 DEBG Searching for vars starting with KAK_LSP_PROJECT_ROOT_JAVASCRIPT, module: kak_lsp::project_root:41 Aug 12 16:48:27.228 DEBG Routing editor request to Route { session: "361948", language: "javascript", root: "/home/dmerej/src/tanker/sdk-js/packages/core" }, module: kak_lsp::session:95 Aug 12 16:48:27.228 DEBG Spawning a new controller for Route { session: "361948", language: "javascript", root: "/home/dmerej/src/tanker/sdk-js/packages/core" }, module: kak_lsp::session:117 Aug 12 16:48:27.228 INFO Starting Language server `yarn flow lsp`, module: kak_lsp::language_server_transport:21 Aug 12 16:48:27.229 DEBG To editor `361948`: eval -client client0 'lsp-get-server-initialization-options ''/tmp/kak-lsp/dmerej/d13af0fa36865e59''', module: kak_lsp::editor_transport:85 Aug 12 16:48:27.231 DEBG lsp_server_initialization_options: , module: kak_lsp::general:192 Aug 12 16:48:27.231 DEBG To server: {"jsonrpc":"2.0","method":"initialize","params":{"capabilities":{"textDocument":{"codeAction":{"codeActionLiteralSupport":{"codeActionKind":{"valueSet":["quickfix","refactor","refactor.extract","refactor.inline","refactor.rewrite","source","source.organizeImports"]}}},"colorProvider":null,"completion":{"completionItem":{"documentationFormat":["plaintext"],"snippetSupport":false}},"semanticHighlightingCapabilities":{"semanticHighlighting":true}},"workspace":{"workspaceEdit":{"documentChanges":true,"resourceOperations":["create","delete","rename"]}}},"processId":362127,"rootUri":"file:///home/dmerej/src/tanker/sdk-js/packages/core","trace":"off"},"id":0}, module: kak_lsp::language_server_transport:175 Aug 12 16:48:27.369 ERRO Failed to parse header, module: kak_lsp::language_server_transport:68 Aug 12 16:48:27.369 INFO Waiting for Messages to language server to finish..., module: kak_lsp::thread_worker:18 Aug 12 16:48:27.369 DEBG Received signal to stop language server, closing pipe, module: kak_lsp::language_server_transport:186 Aug 12 16:48:27.369 DEBG Waiting for language server process end, module: kak_lsp::language_server_transport:81 Aug 12 16:48:27.512 ERRO Language server error: Client fatal exception: End_of_file Raised at file "string.ml", line 114, characters 19-34 Called from file "sexp.ml", line 112, characters 13-47 --- Raised at file "map.ml", line 135, characters 10-25 Called from file "src/unix/lwt_unix.cppo.ml", line 2218, characters 6-37 error Command failed with exit code 1. , module: kak_lsp::language_server_transport:52 Aug 12 16:48:28.369 INFO ... Messages to language server terminated with ok, module: kak_lsp::thread_worker:20 Aug 12 16:48:28.369 INFO Waiting for Messages from language server to finish..., module: kak_lsp::thread_worker:18 Aug 12 16:48:28.369 INFO ... 
Messages from language server terminated with ok, module: kak_lsp::thread_worker:20 Aug 12 16:48:28.369 INFO Waiting for Language server errors to finish..., module: kak_lsp::thread_worker:18 Aug 12 16:48:28.369 INFO ... Language server errors terminated with ok, module: kak_lsp::thread_worker:20 Aug 12 16:49:00.000 DEBG From editor: session = "361948" client = "client0" buffile = "/home/dmerej/src/tanker/sdk-js/packages/core/src/Tanker.js" filetype = "javascript" version = 1 method = "capabilities" [params] , module: kak_lsp::editor_transport:125 Aug 12 16:49:00.000 DEBG Searching for vars starting with KAK_LSP_PROJECT_ROOT_JAVASCRIPT, module: kak_lsp::project_root:41 Aug 12 16:49:00.000 DEBG Routing editor request to Route { session: "361948", language: "javascript", root: "/home/dmerej/src/tanker/sdk-js/packages/core" }, module: kak_lsp::session:95 Aug 12 16:49:00.000 INFO Waiting for Controller to finish..., module: kak_lsp::thread_worker:18 Aug 12 16:49:00.000 INFO ... Controller terminated with ok, module: kak_lsp::thread_worker:20 Aug 12 16:49:00.000 ERRO Failed to send message to controller, module: kak_lsp::session:105 Aug 12 16:49:15.897 DEBG From editor: session = "361948" client = "client0" buffile = "/home/dmerej/src/tanker/sdk-js/packages/core/src/Tanker.js" filetype = "javascript" version = 1 method = "textDocument/definition" [params.position] line = 56 column = 46 , module: kak_lsp::editor_transport:125 Aug 12 16:49:15.898 DEBG Searching for vars starting with KAK_LSP_PROJECT_ROOT_JAVASCRIPT, module: kak_lsp::project_root:41 Aug 12 16:49:15.898 DEBG Routing editor request to Route { session: "361948", language: "javascript", root: "/home/dmerej/src/tanker/sdk-js/packages/core" }, module: kak_lsp::session:95 Aug 12 16:49:15.898 DEBG Spawning a new controller for Route { session: "361948", language: "javascript", root: "/home/dmerej/src/tanker/sdk-js/packages/core" }, module: kak_lsp::session:117 Aug 12 16:49:15.899 INFO Starting Language server `yarn flow lsp`, module: kak_lsp::language_server_transport:21 Aug 12 16:49:15.902 DEBG To editor `361948`: eval -client client0 'lsp-get-server-initialization-options ''/tmp/kak-lsp/dmerej/78813850d3e573b0''', module: kak_lsp::editor_transport:85 Aug 12 16:49:15.908 DEBG lsp_server_initialization_options: , module: kak_lsp::general:192 Aug 12 16:49:15.909 DEBG To server: {"jsonrpc":"2.0","method":"initialize","params":{"capabilities":{"textDocument":{"codeAction":{"codeActionLiteralSupport":{"codeActionKind":{"valueSet":["quickfix","refactor","refactor.extract","refactor.inline","refactor.rewrite","source","source.organizeImports"]}}},"colorProvider":null,"completion":{"completionItem":{"documentationFormat":["plaintext"],"snippetSupport":false}},"semanticHighlightingCapabilities":{"semanticHighlighting":true}},"workspace":{"workspaceEdit":{"documentChanges":true,"resourceOperations":["create","delete","rename"]}}},"processId":362127,"rootUri":"file:///home/dmerej/src/tanker/sdk-js/packages/core","trace":"off"},"id":0}, module: kak_lsp::language_server_transport:175 Aug 12 16:49:16.075 ERRO Failed to parse header, module: kak_lsp::language_server_transport:68 Aug 12 16:49:16.075 INFO Waiting for Messages to language server to finish..., module: kak_lsp::thread_worker:18 Aug 12 16:49:16.075 DEBG Received signal to stop language server, closing pipe, module: kak_lsp::language_server_transport:186 Aug 12 16:49:16.075 DEBG Waiting for language server process end, module: kak_lsp::language_server_transport:81 Aug 12 16:49:16.223 ERRO 
Language server error: Client fatal exception: End_of_file Raised at file "string.ml", line 114, characters 19-34 Called from file "sexp.ml", line 112, characters 13-47 --- Raised at file "map.ml", line 135, characters 10-25 Called from file "src/unix/lwt_unix.cppo.ml", line 2218, characters 6-37 error Command failed with exit code 1. , module: kak_lsp::language_server_transport:52 Aug 12 16:49:17.075 INFO ... Messages to language server terminated with ok, module: kak_lsp::thread_worker:20 Aug 12 16:49:17.075 INFO Waiting for Messages from language server to finish..., module: kak_lsp::thread_worker:18 Aug 12 16:49:17.075 INFO ... Messages from language server terminated with ok, module: kak_lsp::thread_worker:20 Aug 12 16:49:17.075 INFO Waiting for Language server errors to finish..., module: kak_lsp::thread_worker:18 Aug 12 16:49:17.075 INFO ... Language server errors terminated with ok, module: kak_lsp::thread_worker:20 Aug 12 16:50:00.528 INFO Starting main event loop, module: kak_lsp::session:29 Aug 12 16:50:00.529 INFO Shutting down language servers and exiting, module: kak_lsp::session:180 Aug 12 16:50:00.529 INFO Waiting for Messages to editor to finish..., module: kak_lsp::thread_worker:18 Aug 12 16:50:00.529 INFO ... Messages to editor terminated with ok, module: kak_lsp::thread_worker:20 Aug 12 16:50:00.539 INFO Starting main event loop, module: kak_lsp::session:29 Aug 12 16:50:00.540 DEBG Searching for vars starting with KAK_LSP_PROJECT_ROOT_JAVASCRIPT, module: kak_lsp::project_root:41 Aug 12 16:50:00.540 DEBG Routing editor request to Route { session: "361948", language: "javascript", root: "/home/dmerej/src/tanker/sdk-js/packages/core" }, module: kak_lsp::session:95 Aug 12 16:50:11.053 INFO Starting main event loop, module: kak_lsp::session:29 Aug 12 16:50:11.054 DEBG Searching for vars starting with KAK_LSP_PROJECT_ROOT_JAVASCRIPT, module: kak_lsp::project_root:41 Aug 12 16:50:11.055 DEBG Routing editor request to Route { session: "362392", language: "javascript", root: "/home/dmerej/src/tanker/sdk-js/packages/core" }, module: kak_lsp::session:95 Aug 12 16:50:11.055 DEBG Spawning a new controller for Route { session: "362392", language: "javascript", root: "/home/dmerej/src/tanker/sdk-js/packages/core" }, module: kak_lsp::session:117 Aug 12 16:50:11.055 INFO Starting Language server `yarn flow lsp`, module: kak_lsp::language_server_transport:21 Aug 12 16:50:11.058 DEBG To editor `362392`: eval -client client0 'lsp-get-server-initialization-options ''/tmp/kak-lsp/dmerej/19b3f39290c78f89''', module: kak_lsp::editor_transport:85 Aug 12 16:50:11.064 DEBG lsp_server_initialization_options: , module: kak_lsp::general:192 Aug 12 16:50:11.064 DEBG To server: {"jsonrpc":"2.0","method":"initialize","params":{"capabilities":{"textDocument":{"codeAction":{"codeActionLiteralSupport":{"codeActionKind":{"valueSet":["quickfix","refactor","refactor.extract","refactor.inline","refactor.rewrite","source","source.organizeImports"]}}},"colorProvider":null,"completion":{"completionItem":{"documentationFormat":["plaintext"],"snippetSupport":false}},"semanticHighlightingCapabilities":{"semanticHighlighting":true}},"workspace":{"workspaceEdit":{"documentChanges":true,"resourceOperations":["create","delete","rename"]}}},"processId":362577,"rootUri":"file:///home/dmerej/src/tanker/sdk-js/packages/core","trace":"off"},"id":0}, module: kak_lsp::language_server_transport:175 Aug 12 16:50:11.229 ERRO Failed to parse header, module: kak_lsp::language_server_transport:68 Aug 12 16:50:11.229 INFO Waiting 
for Messages to language server to finish..., module: kak_lsp::thread_worker:18 Aug 12 16:50:11.230 DEBG Received signal to stop language server, closing pipe, module: kak_lsp::language_server_transport:186 Aug 12 16:50:11.230 DEBG Waiting for language server process end, module: kak_lsp::language_server_transport:81 Aug 12 16:50:11.374 ERRO Language server error: Client fatal exception: End_of_file Raised at file "string.ml", line 114, characters 19-34 Called from file "sexp.ml", line 112, characters 13-47 --- Raised at file "map.ml", line 135, characters 10-25 Called from file "src/unix/lwt_unix.cppo.ml", line 2218, characters 6-37 error Command failed with exit code 1. , module: kak_lsp::language_server_transport:52 Aug 12 16:50:12.230 INFO ... Messages to language server terminated with ok, module: kak_lsp::thread_worker:20 Aug 12 16:50:12.230 INFO Waiting for Messages from language server to finish..., module: kak_lsp::thread_worker:18 Aug 12 16:50:12.230 INFO ... Messages from language server terminated with ok, module: kak_lsp::thread_worker:20 Aug 12 16:50:12.230 INFO Waiting for Language server errors to finish..., module: kak_lsp::thread_worker:18 Aug 12 16:50:12.230 INFO ... Language server errors terminated with ok, module: kak_lsp::thread_worker:20 Aug 12 16:50:14.335 DEBG From editor: session = "362392" client = "client0" buffile = "/home/dmerej/src/tanker/sdk-js/packages/core/src/Tanker.js" filetype = "javascript" version = 1 method = "capabilities" [params] , module: kak_lsp::editor_transport:125 Aug 12 16:50:14.335 DEBG Searching for vars starting with KAK_LSP_PROJECT_ROOT_JAVASCRIPT, module: kak_lsp::project_root:41 Aug 12 16:50:14.335 DEBG Routing editor request to Route { session: "362392", language: "javascript", root: "/home/dmerej/src/tanker/sdk-js/packages/core" }, module: kak_lsp::session:95 Aug 12 16:50:14.336 INFO Waiting for Controller to finish..., module: kak_lsp::thread_worker:18 Aug 12 16:50:14.336 INFO ... Controller terminated with ok, module: kak_lsp::thread_worker:20 Aug 12 16:50:14.336 ERRO Failed to send message to controller, module: kak_lsp::session:105 Aug 12 16:59:48.575 INFO Starting main event loop, module: kak_lsp::session:29 Aug 12 16:59:48.576 INFO Shutting down language servers and exiting, module: kak_lsp::session:180 Aug 12 16:59:48.576 INFO Waiting for Messages to editor to finish..., module: kak_lsp::thread_worker:18 Aug 12 16:59:48.576 INFO ... Messages to editor terminated with ok, module: kak_lsp::thread_worker:20 Aug 12 17:01:45.667 DEBG From editor: session = "362392" client = "" buffile = "" filetype = "" version = 0 method = "stop" [params] , module: kak_lsp::editor_transport:125 Aug 12 17:01:45.667 INFO Shutting down language servers and exiting, module: kak_lsp::session:180 Aug 12 17:01:45.667 INFO Waiting for Messages to editor to finish..., module: kak_lsp::thread_worker:18 Aug 12 17:01:45.667 INFO ... Messages to editor terminated with ok, module: kak_lsp::thread_worker:20 ps shows both kak-lsp and flow processes running Please tell me if you need more info Update : this does not happen in a trivial project containing just one file. Maybe because TankerHQ/sdk-js uses yarn workspaces ?
2025-04-01T04:35:48.662232
2024-09-30T03:00:40
2555417534
{ "authors": [ "Kunlun-Zhu", "lwaekfjlk" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11762", "repo": "ulab-uiuc/research-town", "url": "https://github.com/ulab-uiuc/research-town/pull/723" }
gharchive/pull-request
[Feature]add Research Bench Evaluation pipeline Closes # 📑 Description ✅ Checks [x] My pull request adheres to the code style of this project [x] My code requires changes to the documentation [ ] I have updated the documentation as required [ ] All the tests have passed [x] Branch name follows type/descript (e.g. feature/add-llm-agents) [x] Ready for code review ℹ Additional Information @lwaekfjlk need to support rag-free mode in proposal writing
2025-04-01T04:35:48.889292
2019-06-14T14:14:32
456271495
{ "authors": [ "Elarnon" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11763", "repo": "ulysseB/telamon", "url": "https://github.com/ulysseB/telamon/pull/262" }
gharchive/pull-request
Proper into_num_set() default impl The default implementation for into_num_set() was erroneously ignoring the last available value. In particular it would transform a singleton set into the empty set. This makes it so that the last value is properly accounted for. bors r=andidr
2025-04-01T04:35:48.890718
2024-12-03T14:08:26
2715133214
{ "authors": [ "YaseenSadat", "aneeqmuh" ], "license": "CC0-1.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11764", "repo": "umairx25/StockFlow", "url": "https://github.com/umairx25/StockFlow/pull/33" }
gharchive/pull-request
Debug: Update README with Patches and Improvements This pull request updates the README file with the latest patches and enhancements. The changes include detailed explanations of recent updates, formatting improvements, and added clarity to usage instructions. These changes aim to enhance readability and provide a better experience for developers and users. I carefully reviewed the updates made to the README file and found the changes to be well-documented and valuable. The detailed explanations of the latest patches, improved formatting, and clearer usage instructions enhance readability and make it easier for developers and users to understand and utilize the project.
2025-04-01T04:35:48.891644
2018-10-22T15:51:54
372579279
{ "authors": [ "MGRagab", "rolgalan" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11765", "repo": "umano/AndroidSlidingUpPanel", "url": "https://github.com/umano/AndroidSlidingUpPanel/pull/946" }
gharchive/pull-request
fix issue of canvas.save method Updating gradle version , updating build tools version and compile SDK …version also fix issue of canvas.save method Amazing! I also need this in one of my projects to update it. Big thanks!
2025-04-01T04:35:48.895508
2020-01-20T14:59:10
552351670
{ "authors": [ "enkelmedia", "stefankip" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11766", "repo": "umbraco-community/Our.Umbraco.OpeningHours", "url": "https://github.com/umbraco-community/Our.Umbraco.OpeningHours/pull/38" }
gharchive/pull-request
Solves #37 - Support for Umbraco 8 This PR adds support for Umbraco 8.4+ (since they changed the value converter-implementation I chose to go with the latest version). We've also removed external dependencies to make the package its own "unit". I have not updated any meta data for release/build for nuget/our.umbraco and not changed any of the markdown for readme etc. Let me know if you would like any changes Hi! Sorry for the late reply. Project Format I don't have the time at the moment to change this, since it works with the current format. I guess that this might be more "nice to have" than "must have" =D Might be something to look at in the future. Assembly conflicts Not sure either, I noticed them but they did not have any impact on the artifacts so I did not spend too much time trying to solve it. DatePicker That's true! Not sure why I missed that, I would have to look closer at that Colors Sounds fair Skybrud.Essentials I do see the point that you're making, when I did the rewrite I was only considering the fact that it was a small part of the Essentials-dependency that was used and figured that it would be a good idea to reduce the number of dependencies and also to avoid the issue that you're describing with different versions of the DLL - basically making it its own working "unit", also making the code-base "complete" in the way that there is no external thing that is out of control for the package that is used. But, as you say, it's a potential nightmare having to fix a bug in 10 different packages over just updating the dependency. If you would prefer keeping the dependency on Skybrud.Essentials I could just add it back again? =D // m Too bad this PR sort of 'died'.
2025-04-01T04:35:48.910438
2017-01-20T15:18:33
202165212
{ "authors": [ "Shazwazza", "biapar" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11767", "repo": "umbraco/UmbracoRestApi", "url": "https://github.com/umbraco/UmbracoRestApi/issues/18" }
gharchive/issue
Where is the code to authenticate and authorize users? Where is the code to authenticate and authorize users? Simple question that doesn't have a simple answer. See: https://github.com/umbraco/UmbracoRestApi/issues/2 and then: https://github.com/umbraco/UmbracoRestApi/issues/17 The authorization is done by an attribute [UmbracoAuthorize], the authentication is done by either the token auth server built in to IdentityExtensions, or if you've enabled cookie authentication. https://our.umbraco.org/Documentation/Implementation/Rest-Api/
2025-04-01T04:35:48.918622
2014-05-31T10:33:08
34703917
{ "authors": [ "adam-lynch", "bebraw" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11768", "repo": "umdjs/umd", "url": "https://github.com/umdjs/umd/issues/40" }
gharchive/issue
Testing The Grunt example linked in the readme isn't very clear for me. I can only assume that if it wasn't properly UMD, then loads of the underlying tests would fail, which isn't ideal really. It would be great if there were solid tests (in this / a separate project) for testing if everything is exported correctly, etc. Having tests properly documented, maintained and used would only increase confidence. Then there could be build system specific plugins / projects which just wrap / use these tests (perfect for CI). What do you think? I'd love to contribute but I wouldn't be 100% sure what exactly should be tested and what all the edge cases are. @adam-lynch I have some preliminary Saucelabs tests at libumd. I am certain the tests can be improved a lot but even this is better than nothing and can be built upon if there is interest.
2025-04-01T04:35:48.979854
2023-08-18T10:39:11
1856493405
{ "authors": [ "jhalmen" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11769", "repo": "umit-iace/tool-libs", "url": "https://github.com/umit-iace/tool-libs/pull/76" }
gharchive/pull-request
I2C polling This patchset introduces a public poll method on the I2C interface that must be called regularly to check for stalled communication and initialize transfers. do something like k.every(1, [](uint32_t, uint32_t) { i2cbus.poll(); }); I feel like #90 is the fix i've been looking for, but this is probably a feature we should consider anyways
2025-04-01T04:35:48.984526
2019-09-08T18:15:27
490785391
{ "authors": [ "akosourov", "umputun" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11770", "repo": "umputun/remark", "url": "https://github.com/umputun/remark/pull/431" }
gharchive/pull-request
Change url in comments, decorate approach This is still wip. It shows a possible decorate approach for the import reader. https://github.com/umputun/remark/issues/412#issuecomment-526948747 Generally I like this PR. It goes the right direction, easy to read and easy to reason about. I've finished and fixed the things you noted. I've tested the remap procedure by both command and api call, running a remark container which serves a single siteID. I want to note something about prefix search. Do we want the ability to merge comments from several urls to a single one? For example if remark keeps several urls (posts) https://site.com/p/1 https://site.com/p/2 and you want to merge them to a single https://site.com/merged/ the current realisation of prefix search won't help. I mean if you set rules like https://site.com/p/* https://site.com/merged/ the result will be https://site.com/merged/1 https://site.com/merged/2 Do we want the ability to merge comments from several urls to a single one? I think we can live without it. The practical use cases for prefix mapping I had in mind are: change http:// to https:// change domain, i.e. www.blah.com to blah.com Anything outside of those can be handled by a user-provided mapping file for concrete urls I've fixed naming a little and after it the drone build failed (don't see problems in those changes) looks like a github-related issue - fatal: The remote end hung up unexpectedly LGTM
2025-04-01T04:35:49.008443
2018-02-17T14:23:45
298008284
{ "authors": [ "AlexTr", "timreeves" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11771", "repo": "unaio/una", "url": "https://github.com/unaio/una/issues/1125" }
gharchive/issue
BxDolStudioToolsAudit.php file_get_contents() to php.net can hang forever In "protected function requirementsPHP()" there is this line: file_get_contents("https://secure.php.net/releases/index.php?serialize=1"); This is fine in itself BUT... Currently the php.net website has an IPv6 address assigned which is not reachable. Most systems revert to IPv4, which works, as expected. But PHP (all versions) on Debian 8 hangs forever, so the installation of UNA is not possible on Debian 8. See: https://bugs.php.net/bug.php?id=75974 However, it would be prudent to protect your call with a timeout and error message. I implemented a workaround using Curl, here is the code:

```php
function http_get_contents($url, $opts = []) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);
    curl_setopt($ch, CURLOPT_USERAGENT, "{$_SERVER['SERVER_NAME']}");
    curl_setopt($ch, CURLOPT_URL, $url);
    if (is_array($opts) && $opts) {
        foreach ($opts as $key => $val) {
            curl_setopt($ch, $key, $val);
        }
    }
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    if (FALSE === ($retval = curl_exec($ch))) {
        error_log(curl_error($ch));
    } else {
        return $retval;
    }
}

$curlopts = [ CURLOPT_IPRESOLVE => CURL_IPRESOLVE_V4 ];
$s = http_get_contents('http://www.php.net/releases/index.php?serialize=1', $curlopts);
```

Perhaps it could be useful to incorporate the "http_get_contents" function into UNA core utilities and use it, perhaps in other places where users might be able to have contents inserted from a URI into their web pages, for example (it doesn't always have to be an iframe...). And Curl seems more robust, or at least better configurable, than PHP streams. Cheers, Tim Thank you @timreeves Actually we have the bx_file_get_contents function in inc/utils.inc.php which is using curl by default. Thanks @AlexTr - I have modified my patch as follows:

```php
$aParams = array();
$aHeaders = array();
$sHttpCode = null;
$aBasicAuth = array();
$curlopts = [ CURLOPT_IPRESOLVE => CURL_IPRESOLVE_V4 ];
$s = bx_file_get_contents('http://www.php.net/releases/index.php?serialize=1', $aParams, 'get', $aHeaders, $sHttpCode, $aBasicAuth, 20, $curlopts);
```

This works for me - thanks for the tip! See update to bug notice at php.net. Since only www.php.net has an IPv6 address, but not php.net (without www.), it would solve the problem just to remove the "www." from the URL. Thank you @timreeves
2025-04-01T04:35:49.106373
2019-03-19T11:41:04
422685538
{ "authors": [ "codecov-io", "scala-steward" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11772", "repo": "underscoreio/slickless", "url": "https://github.com/underscoreio/slickless/pull/36" }
gharchive/pull-request
Update scalatest to 3.0.7 Updates org.scalatest:scalatest from 3.0.6 to 3.0.7. I'll automatically update this PR to resolve conflicts as long as you don't change it yourself. If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention @scala-steward in the comments below. Have a nice day! Ignore future updates Add this to your .scala-steward.conf file to ignore future updates of this dependency: updates.ignore = [{ groupId = "org.scalatest", artifactId = "scalatest" }] Codecov Report Merging #36 into master will not change coverage. The diff coverage is n/a. @@ Coverage Diff @@ ## master #36 +/- ## ======================================= Coverage 93.75% 93.75% ======================================= Files 1 1 Lines 16 16 ======================================= Hits 15 15 Misses 1 1 Continue to review full report at Codecov. Legend - Click here to learn more Δ = absolute <relative> (impact), ø = not affected, ? = missing data Powered by Codecov. Last update a82c010...1ead34d. Read the comment docs.
2025-04-01T04:35:49.220040
2023-01-23T12:58:50
1553051631
{ "authors": [ "istride" ], "license": "BSD-2-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11776", "repo": "unicef/iogt", "url": "https://github.com/unicef/iogt/issues/1514" }
gharchive/issue
load_po_files management command is slow We have created a management command (load_po_files) that loads translation strings from PO files in the project into an app database via translation_manager.manager.Manager.load_data_from_po. We have also patched the provided Manager so that existing translation strings are not overwritten in the database. It is intended that load_po_files be run every time the app is upgraded, and I have found that this makes the upgrade process quite slow, especially, as in our case, when there may be 40+ deployments to upgrade at the same time. In addition, I suspect that load_po_files puts the database under quite a lot of stress, and when the database is shared between several deployments, this makes me nervous about upgrading deployments simultaneously, in case the database is swamped and becomes unresponsive. I have very roughly and unscientifically timed the original manager and the patched manager - using the time command on my development machine with an sqlite database. From a fresh database there is little difference, but there is a significant drop in performance with the patched manager after the first invocation of load_po_files. After a brief investigation, I found that the patched manager is issuing many requests to individually create TranslationEntry, which could be issued in bulk. I have made amendments to the patched manager to improve its performance, and the results can be seen below. The result is a manager that loads PO files faster than the original manager with no difference between invocations on a fresh database vs ongoing.

Original Manager
From fresh database
./manage.py load_po_files 11.09s user 1.97s system 28% cpu 45.314 total
./manage.py load_po_files 11.59s user 2.00s system 20% cpu 1:06.19 total
./manage.py load_po_files 11.45s user 2.08s system 20% cpu 1:05.85 total
./manage.py load_po_files 11.78s user 1.92s system 21% cpu 1:04.11 total
After first invocation
./manage.py load_po_files 6.98s user 0.29s system 93% cpu 7.743 total
./manage.py load_po_files 6.50s user 0.28s system 93% cpu 7.245 total
./manage.py load_po_files 6.26s user 0.22s system 93% cpu 6.964 total
./manage.py load_po_files 6.38s user 0.21s system 93% cpu 7.069 total

Patched Manager
From fresh database
./manage.py load_po_files 15.45s user 1.98s system 34% cpu 50.859 total
./manage.py load_po_files 15.23s user 1.93s system 34% cpu 50.465 total
./manage.py load_po_files 17.72s user 2.31s system 29% cpu 1:07.30 total
./manage.py load_po_files 16.00s user 2.24s system 34% cpu 53.139 total
After first invocation
./manage.py load_po_files 10.40s user 0.80s system 50% cpu 22.068 total
./manage.py load_po_files 10.28s user 0.83s system 38% cpu 28.928 total
./manage.py load_po_files 10.05s user 0.82s system 39% cpu 27.756 total
./manage.py load_po_files 10.21s user 0.84s system 39% cpu 28.124 total

Faster Patched Manager
From fresh database
./manage.py load_po_files 2.98s user 0.21s system 75% cpu 4.238 total
./manage.py load_po_files 3.14s user 0.23s system 76% cpu 4.418 total
./manage.py load_po_files 3.06s user 0.24s system 76% cpu 4.328 total
./manage.py load_po_files 3.15s user 0.22s system 76% cpu 4.407 total
After first invocation
./manage.py load_po_files 3.03s user 0.23s system 76% cpu 4.269 total
./manage.py load_po_files 3.08s user 0.21s system 77% cpu 4.257 total
./manage.py load_po_files 3.23s user 0.20s system 78% cpu 4.377 total
./manage.py load_po_files 3.07s user 0.17s system 76% cpu 4.255 total

@cbunicef FYI, this issue will be resolved by a PR that
has been approved to be merged and I would like it to go into the next release. The intention is to significantly speed up the process of upgrading existing deployments without changing how translations are stored in the db.
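To make the "could be issued in bulk" point concrete, here is a hedged sketch of the kind of change involved. TranslationEntry is the model named above (from django-translation-manager), but the field names used here (original, translation, language) and the helper itself are assumptions for illustration, not the project's actual patch:

```python
# Hedged sketch: replace one INSERT per PO entry with a filtered bulk insert,
# keeping the "never overwrite existing translations" behaviour described above.
from translation_manager.models import TranslationEntry  # model name taken from the issue

def load_entries(parsed_entries):
    # parsed_entries: iterable of (msgid, msgstr, language) tuples read from the PO files.
    existing = set(TranslationEntry.objects.values_list("original", "language"))
    new_rows = [
        TranslationEntry(original=msgid, translation=msgstr, language=lang)
        for msgid, msgstr, lang in parsed_entries
        if (msgid, lang) not in existing  # leave existing strings untouched
    ]
    # One batched INSERT per chunk instead of a query per entry.
    TranslationEntry.objects.bulk_create(new_rows, batch_size=500)
```

Django's bulk_create (optionally with ignore_conflicts=True) is the usual way to get this effect; the actual amendment lives in the project's patched manager.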
2025-04-01T04:35:49.282780
2019-08-14T14:28:16
480711591
{ "authors": [ "bmesuere", "pdawyndt" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11777", "repo": "unipept/unipept", "url": "https://github.com/unipept/unipept/issues/267" }
gharchive/issue
unipept pept2 tools should not reorder and/or modify peptides Running the unipept pept2... tools changes the order of the peptides read from the input. In case the -e option is used, the tools also modify the peptides as read from the input to a normalized version. Both reordering and modification should not happen. The modification might happen with unipept pept2prot -e, but then only with a specific option and the modified version of the peptide should in this case be the version of the peptide as it is found in the identified protein. This might be confusing in rare cases where multiple versions of the same peptide match the same protein though. I also don't know if the database has a fast way of identifying what version of the peptide matches a given protein in case I and L are equated. Example: $ fasta2prot < patient_TS19_CDS | sed -ne '10{p;q}' | prot2pept | peptfilter YFGHIFSDEDK NLLETGNMGR VVELK NGEYIPSFISIDK LTNEVVAMK AENAFIPR GVELTEQLXTR NEGIGK AHEGI $ fasta2prot < patient_TS19_CDS | sed -ne '10{p;q}' | prot2pept | peptfilter | unipept pept2lca sequence,taxon_id,taxon_name,taxon_parent_id,taxon_rank VVELK,1,root,1,no rank NLLETGNMGR,976,Bacteroidetes,68336,phylum LTNEVVAMK,171549,Bacteroidales,200643,order $ fasta2prot < patient_TS19_CDS | sed -ne '10{p;q}' | prot2pept | peptfilter | unipept pept2lca -e sequence,taxon_id,taxon_name,taxon_parent_id,taxon_rank NEGLGK,1,root,1,no rank VVELK,1,root,1,no rank AENAFLPR,817,Bacteroides fragilis,816,species NGEYLPSFLSLDK,817,Bacteroides fragilis,816,species NLLETGNMGR,976,Bacteroidetes,68336,phylum LTNEVVAMK,171549,Bacteroidales,200643,order Original issue by @pdawyndt on Sat May 03 2014 at 10:28. Closed by an unknown user on Tue May 06 2014 at 15:01. I've fixed the renaming of input peptides and also the order for a single batch. However because multiple requests are queued there is no way to guarantee the order of batches. I'm going to implement something that can keep track of the batch order and will output them in the correct order. Original comment by ghost on Tue May 06 2014 at 14:01. This is fixed in unipept 0.3.1. Original comment by ghost on Tue May 06 2014 at 15:01.
2025-04-01T04:35:49.286531
2022-03-10T09:27:48
1164969311
{ "authors": [ "vtolstov" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11778", "repo": "unistack-org/micro-client-http", "url": "https://github.com/unistack-org/micro-client-http/pull/66" }
gharchive/pull-request
add additional wrappers support Signed-off-by: Vasiliy Tolstov<EMAIL_ADDRESS> close #65
2025-04-01T04:35:49.310189
2022-10-04T08:13:37
1395845203
{ "authors": [ "Scarjit" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11779", "repo": "united-manufacturing-hub/united-manufacturing-hub", "url": "https://github.com/united-manufacturing-hub/united-manufacturing-hub/issues/1284" }
gharchive/issue
Check kafka-to-blob & kafka-to-postgresql for required updates Is your feature request related to a problem? Please describe. The new datamodel might require changes to kafka-to-blob and kafka-to-postgresql Ex: addOrder -> addJob kafka-to-postgresql should only run, if the migrationtable exists and the last migration was to the current deployed version Closed, as we need to agree on new datamodel first
2025-04-01T04:35:49.311657
2017-02-28T19:33:17
210883091
{ "authors": [ "joelcollinsdc", "konklone", "thinkcontext" ], "license": "cc0-1.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11780", "repo": "unitedstates/congress-legislators", "url": "https://github.com/unitedstates/congress-legislators/pull/445" }
gharchive/pull-request
run wikidata_bioguide.py Added getting ballotpedia from wikidata to the script and ran it. Picked up several ballotpedia ids, updated a few renamed wikipedia articles and added a mess of google entity ids. Well, I can't see why the Travis build failed, perhaps because of the S3 outage or a second-order outage, but would you mind fixing the build before we merge? You can replicate it by running the commands in .travis.yml. Restarted the build and it passed.
2025-04-01T04:35:49.335273
2023-04-28T14:05:26
1688623517
{ "authors": [ "ManasMadrecha", "zsilbi" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11781", "repo": "unjs/nitro", "url": "https://github.com/unjs/nitro/issues/1198" }
gharchive/issue
SWR responses cached forever in CloudFlare Workers Environment CloudFlare Workers Nuxt 3.4.1 nitro: { storage: { cache: { driver: "cloudflare-kv-binding", }, }, }, Reproduction export default cachedEventHandler( async () => { // Test subrequest return await $fetch( "https://timeapi.io/api/Time/current/zone?timeZone=Europe/Budapest" ); }, { swr: true, maxAge: 60 } ); deployed via Wrangler Describe the bug The initial request is cached forever. The expires time is 15 minutes ago from now as I am writing this. I tested it upto 2 days. It works on local Wrangler dev perfectly. It also works on CloudFlare Workers with swr: false. Additional context The initial response: { "year":2023, "month":4, "day":28, "hour":15, "minute":47, "seconds":6, "milliSeconds":370, "dateTime":"2023-04-28T15:47:06.370526", "date":"04/28/2023", "time":"15:47", "timeZone":"Europe/Budapest", "dayOfWeek":"Friday", "dstActive":true } Cache file from KV: { "expires":1682689686245, "value":{ "code":200, "headers":{ "etag":"W/\"z7FJBuCozK\"", "last-modified":"Fri, 28 Apr 2023 13:47:06 GMT", "cache-control":"s-maxage=60, stale-while-revalidate=15" }, "body":{ "year":2023, "month":4, "day":28, "hour":15, "minute":47, "seconds":6, "milliSeconds":370, "dateTime":"2023-04-28T15:47:06.370526", "date":"04/28/2023", "time":"15:47", "timeZone":"Europe/Budapest", "dayOfWeek":"Friday", "dstActive":true } }, "mtime":1682689626377, "integrity":"MVNRfu1EFA" } Logs No response Is this solved? Is this solved? Yes, it was fixed by this commit: https://github.com/unjs/nitro/commit/421d6255ea4633753101ccaac21a8b89726d0c01 Thank you!
2025-04-01T04:35:49.344352
2019-06-26T17:11:47
461087447
{ "authors": [ "unleashed" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11782", "repo": "unleashed/threescalers", "url": "https://github.com/unleashed/threescalers/pull/28" }
gharchive/pull-request
Update trust CI system Update the travis & appveyor templates + ci scripts to latest master in trust. Note that tests might fail while I tune this. And also, with the update: avoids installing rust twice in newer images from Travis; ensures rustfmt is installed via rustup; correctly limits checks with clippy and rustfmt to nightly; tests the two windows toolchains in appveyor. appveyor takes ages :/ oh, I had set up auto-dismissal :D - just made MSVC the only build on appveyor. Merging as r=@davidor, and the CI has been tested quite a bit already.
2025-04-01T04:35:49.375795
2019-11-08T20:46:00
520225994
{ "authors": [ "manushah17", "sirichandana95" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11783", "repo": "uno-isqa-8950/uno-cpi", "url": "https://github.com/uno-isqa-8950/uno-cpi/pull/1688" }
gharchive/pull-request
Siri sprint Modified Project/Views.py, projects/Checkproject.html, projects/templates/projects/createProject.html, projects/urls.py, UnoCPI/sqlfiles.py for search improvements in projects. Merged everything other than createProject.html. Please rebase createProject.html from sprint_4 before incorporating your changes; createProject.html has been changed a lot for creating projects.
2025-04-01T04:35:49.391173
2023-07-17T14:28:24
1808007898
{ "authors": [ "Marc-Antoine-Soucy", "kazo0" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11784", "repo": "unoplatform/uno.toolkit.ui", "url": "https://github.com/unoplatform/uno.toolkit.ui/issues/658" }
gharchive/issue
[NavigationBar] Multiple NavigationBar causes crash on WinUI Current behavior If you have NavigationBar on two different tabs (ie nested on a single page) when you switch tabs you'll see an exception on winUI Expected behavior Navigation between two tabs should work How to reproduce it (as minimally and precisely as possible) Navigate to one of the two tabs, notice if both home and saved have a NavigationBar, the app crashes when you try to navigate from one to the next. unoExtensionBasicApp.zip (Uncomment the NavigationBar on Home before running, and then switch tabs) Environment Nuget Package: <PackageVersion Include="Uno.Extensions.Configuration" Version="3.0.0-dev.1960" /> <PackageVersion Include="Uno.Extensions.Hosting" Version="2.4.2" /> <PackageVersion Include="Uno.Extensions.Hosting.WinUI" Version="3.0.0-dev.1960" /> <PackageVersion Include="Uno.Extensions.Http" Version="3.0.0-dev.1960" /> <PackageVersion Include="Uno.Extensions.Http.Refit" Version="3.0.0-dev.1960" /> <PackageVersion Include="Uno.Extensions.Localization" Version="2.4.2" /> <PackageVersion Include="Uno.Extensions.Localization.WinUI" Version="3.0.0-dev.1960" /> <PackageVersion Include="Uno.Extensions.Logging.OSLog" Version="1.6.0-dev.2" /> <PackageVersion Include="Uno.Extensions.Logging.WebAssembly.Console" Version="1.6.0-dev.2" /> <PackageVersion Include="Uno.Extensions.Logging.WinUI" Version="3.0.0-dev.1960" /> <PackageVersion Include="Uno.Extensions.Navigation" Version="2.4.2" /> <PackageVersion Include="Uno.Extensions.Navigation.WinUI" Version="3.0.0-dev.1960" /> <PackageVersion Include="Uno.Extensions.Navigation.Toolkit.WinUI" Version="3.0.0-dev.1960" /> <PackageVersion Include="Uno.Extensions.Reactive" Version="2.4.2" /> <PackageVersion Include="Uno.Extensions.Reactive.WinUI" Version="3.0.0-dev.1960" /> <PackageVersion Include="Uno.Extensions.Serialization" Version="2.4.2" /> <PackageVersion Include="Uno.Extensions.Serialization.Http" Version="3.0.0-dev.1960" /> <PackageVersion Include="Uno.Extensions.Serialization.Refit" Version="3.0.0-dev.1960" /> <PackageVersion Include="Uno.Material.WinUI" Version="3.0.0-dev.262" /> Package Version(s): Affected platform(s): [ ] iOS [ ] Android [ ] WebAssembly [ ] WebAssembly renders for Xamarin.Forms [X] Windows [ ] Build tasks Visual Studio: [ ] 2017 (version: ) [X] 2022 (version: 17.6) [ ] for Mac (version: ) Relevant plugins: [ ] Resharper (version: ) Anything else we need to know? fixed by https://github.com/unoplatform/uno.toolkit.ui/pull/860 just need to update packages
2025-04-01T04:35:49.793780
2023-08-24T14:21:40
1865276605
{ "authors": [ "davidnewhall", "groupe3sun", "platinummonkey" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11785", "repo": "unpoller/unpoller", "url": "https://github.com/unpoller/unpoller/issues/600" }
gharchive/issue
Prometheus error I have this message: expected value after metric, got "\n" ("INVALID") while parsing: "unpoller\n"
Unifi controller: <IP_ADDRESS>
Prometheus: 2.45.0
sounds like a configuration issue, post your config and which version of unpoller you are on
Unpoller version 2.8.1 branch HEAD revision 1
configuration prometheus:
  job_name: 'unpoller'
  metrics_path: /unifi
  scrape_interval: 30s
  static_configs:
    - targets: ['<IP_ADDRESS>:9130']
up.conf:
[prometheus]
disable = false
# This controls on which ip and port /metrics is exported when mode is "prometheus".
# This has no effect in other modes. Must contain a colon and port.
http_listen = "<IP_ADDRESS>:9130"
# Adding an SSL Cert and Cert Key will make Poller listen with SSL/https.
ssl_cert_path = ""
ssl_key_path = ""
# Errors are rare. Setting this to true will report them to Prometheus.
report_errors = false
# Record data for disabled or down (unlinked) switch ports.
dead_ports = false
Where does this message appear? Which log? Is there any more context around the error you can add?
expected value after metric, got "\n" ("INVALID") while parsing: "unpoller\n"
Is this still happening with the latest version?
2025-04-01T04:35:49.802879
2015-08-05T15:48:23
99236147
{ "authors": [ "somexpert" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11786", "repo": "unt-libraries/django-citeit", "url": "https://github.com/unt-libraries/django-citeit/issues/5" }
gharchive/issue
Location model accepts improperly formed locations. The location field in the Location model should be in the format specified in the code ("Locations should be in the format of Country > State > County > Township"), but currently it will accept any string that fits the length, regardless of whether it follows that convention or not. This has been deemed a non-issue.
2025-04-01T04:35:49.807788
2015-12-18T14:05:19
122954085
{ "authors": [ "geier", "untitaker" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11787", "repo": "untitaker/vdirsyncer", "url": "https://github.com/untitaker/vdirsyncer/pull/308" }
gharchive/pull-request
Add davical tests to Travis Fix #23 cc @geier I've run those tests locally already (against @geier's server) and encountered absolutely no failures. BTW the code for the davical setup can be found here (as always): https://github.com/vdirsyncer/davical-testserver CPU-wise, running those tests is no problem for my machine, not sure about bandwidth, but I believe that should be okay, too. I have no problem with the url being public at all, I used to host some public stuff there as well. @geier Good to hear. However, recently brutus.lostpackets.de loses all packets for me, and has for quite a while. It doesn't look like a simple IP reassignment (which I get from my provider too). My dyndns provider is moving servers, something seems to have gone wrong. I hope it's fixed soon.
2025-04-01T04:35:49.837859
2024-07-11T21:01:33
2404120694
{ "authors": [ "PavelPikat", "bobh66" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11788", "repo": "upbound/provider-terraform", "url": "https://github.com/upbound/provider-terraform/issues/278" }
gharchive/issue
0.17.0 fails to clone remote modules: fatal: empty string is not a valid pathspec What happened? After upgrading provider-terrraform from 0.14.1 to 0.17.0 hoping to fix issues with remote repositories and max-reconcile-rate, we experience failures in all our new and existing Workspaces where the provider is unable to download remote module from our private Azure DevOps Git repository. The error observed is: connect failed: cannot get remote Terraform module: error downloading<EMAIL_ADDRESS>/usr/bin/git exited with 128: fatal: empty string is not a valid pathspec. please use . instead if you meant to match all paths Git credentials are configured via ProviderConfig's credentials, .git-credentials file How can we reproduce it? Have a Terraform module in a remote Azure DevOps Git repository. Sample Composition used to provision Azure Resource Group: apiVersion: apiextensions.crossplane.io/v1 kind: Composition metadata: name: resourcegroup-azure-v1 labels: crossplane.io/xrd: xresourcegroups provider: azure spec: compositeTypeRef: apiVersion: idp/v1 kind: XResourceGroupV1 mode: Pipeline pipeline: - step: render-templates functionRef: name: function-go-templating input: apiVersion: gotemplating.fn.crossplane.io/v1beta1 kind: GoTemplate source: Inline inline: template: | {{- $baseName := index .observed.composite.resource.metadata.labels "crossplane.io/composite" }} --- apiVersion: tf.upbound.io/v1beta1 kind: Workspace metadata: name: {{ print $baseName "-azure-rg" }} annotations: gotemplating.fn.crossplane.io/composition-resource-name: {{ print $baseName "-azure-rg" }} {{ setResourceNameAnnotation (print $baseName "-azure-rg") }} spec: providerConfigRef: name: azure forProvider: source: Remote initArgs: - -backend-config=config.azurerm.tfbackend - -backend-config=key=platform-engineering/uxp- module<EMAIL_ADDRESS> entrypoint: azure/v1/rg {{- with .observed.composite.resource.spec.parameters }} varmap: name: {{ .name }} location: {{ .location }} {{- end }} - step: automatically-detect-readiness functionRef: name: function-auto-ready What environment did it happen in? Crossplane Version: 1.16.0-up.1 Provider Version: 0.17.0 Kubernetes Version: 1.29.4 Kubernetes Distribution: AKS Rolling back to 0.16.0 fixes all failed Workspaces right away, without any other changes anywhere else Thanks @PavelPikat - I suspect that this might be a bug in the new go-getter version we pulled in. Can you try adding ?ref=<branch-or-tag> to the module specification, where branch-or-tag is the git reference you want to use (default branch) and see if that fixes the problem on 0.17? @bobh66 Thanks for the suggestion, it does help
2025-04-01T04:35:49.842252
2016-12-13T15:59:59
195292257
{ "authors": [ "AlexisMontagne", "mihaitodor" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11789", "repo": "upfluence/sensu-client-go", "url": "https://github.com/upfluence/sensu-client-go/pull/20" }
gharchive/pull-request
Add RabbitMQ High Availability connection support This is an attempt to add basic support for RabbitMQ High Availability (see #13). Limitations / TODOs for future PRs: Prefetch values are currently ignored. No SSL support. Documentation enhancements. @mihaitodor Feel free to update the deps, and we will be able to merge this PR then. @AlexisMontagne Sorry for the long delay. I finally managed to look into this and updated the dependencies as you requested. However, I'm not 100% sure it went well, because I had a lot of trouble figuring out which version of Godeps you are using in combination with Go 1.4... The latest version fails to update the 3 dependencies from sensu-go, so I tried to use an older version and to re-create the whole Godeps folder from scratch via godep save, but that messed up the imports completely. I ended up hacking it, so I hope I got it right. LGTM
2025-04-01T04:35:49.881540
2024-08-21T07:31:00
2477290207
{ "authors": [ "willson113", "ytkimirti", "zhang-xing-tfs" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11790", "repo": "upstash/wikipedia-semantic-search", "url": "https://github.com/upstash/wikipedia-semantic-search/issues/9" }
gharchive/issue
The page displays an error when running this project It's probably because you don't have a namespace in your vector index, I will add a warning instead of throwing an error, thanks! Can you provide documentation? How do I call this work? How do I use it in my own Python program? Thank you!
2025-04-01T04:35:49.959042
2016-02-20T03:07:51
135026189
{ "authors": [ "cgyarvin", "joemfb", "juped", "ohAitch" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11792", "repo": "urbit/arvo", "url": "https://github.com/urbit/arvo/issues/12" }
gharchive/issue
remove vestigial tap argument I do not believe I've ever seen tap:by in any form other than (~(tap by foo)). Would anyone (@cgyarvin, @philipcmonk) mind if I added a +- all (tap) to by, in, to, and converted existing code to use it? Perhaps tap should go altogether, but that would involve (minor) mucking with jets, official deprecation, etc. gas is append list elements to set, tap is append set elements to list, so I have no idea what you intend by +- all (gas) if you meant +- all (tap), you would have to replace (~(tap by s)) with, uh, (~(all by s)) if you meant something about gassing, casual containers exists even if nobody ever uses gas or tap with anything but an empty set/list, which is pretty dubious, the argument still provides the type to use Oops, definitely meant (tap) in the end there yeah. edited no, I would replace (~(tap by a)) with ~(all by a), no outer "kick" parens. The motivation here is unwarranted boilerplate. gas is definitely used, tap is always(afaict) used without arguments so it sure isn't being provided at the call site. In practice tap uses (list _?~(a !! n.a)), which is the right type. OK, sure, let's replace tap while we can still cheat it in there, but only if there's no uses. (Do I want to know why tap is a +- |=?) I mean, "tap into accumulator" sounds maybe useful, but without is definitely the common case; this is why I'm proposing a new name. Though I suppose if it does never get used, tap can be considered deprecated, and the thing renamed back to tap eventually(I recall something similar happening with union/difference). On Sunday, 21 February 2016, Raymond Pasco<EMAIL_ADDRESS>wrote: OK, sure, let's replace tap while we can still cheat it in there, but only if there's no uses. — Reply to this email directly or view it on GitHub https://github.com/urbit/arvo/issues/12#issuecomment-186855382. I would suspect it to be used in jug or whatever, but you're the proposer, presumably you've been doing due diligence? Performance optimization, so that you can e.g. (roll list-of-sets |=([a=small-set b=big-accumulator] (~(tap in a) b)) without having to re-weld every time. In practice I don't think I' ve ever seen it come up. On Sunday, 21 February 2016, Anton Dyudin<EMAIL_ADDRESS>wrote: I mean, "tap into accumulator" sounds maybe useful, but without is definitely the common case; this is why I'm proposing a new name. Though I suppose if it does never get used, tap can be considered deprecated, and the thing renamed back to tap eventually(I recall something similar happening with union/difference). On Sunday, 21 February 2016, Raymond Pasco<EMAIL_ADDRESS><javascript:_e(%7B%7D,'cvml','notifications@github.com');>> wrote: OK, sure, let's replace tap while we can still cheat it in there, but only if there's no uses. — Reply to this email directly or view it on GitHub https://github.com/urbit/arvo/issues/12#issuecomment-186855382. Right, I'd expect it in jar/jug if anywhere. Those are pretty barebones right now, though. Anyway, isn't optimization for jets? :) Can't kick the can to an unspecified future, though. If you foresee "deprecating" something, then do it now, or don't. Idk about "been", there's a reason this is phrased as "hey everyone else, have /you/ ever used the non-empty version?" 
But I can't remember off the top of my head seeing any hoon that used it, except a couple examples I contributed to documentation; and this has been slightly bugging me pretty much every time I have the extra parens in a one-liner, and I couldn't think of any usage any of those times either. Wrt jugs, I think they currently don't /have/ tap/gas; but they should, and that would be valid usage yes. Though I wouldn't be too worried if tap in current form /didn't/ already exist, the definition used weld, and got jetted if there were memory thrashing problems(which there would be from function calls anyway). On Sunday, 21 February 2016, Raymond Pasco<EMAIL_ADDRESS>wrote: I would suspect it to be used in jug or whatever, but you're the proposer, presumably you've been doing due diligence? — Reply to this email directly or view it on GitHub https://github.com/urbit/arvo/issues/12#issuecomment-186855805. I'd prefer to have a set timeline, so everybody gets a chance to quietly port their branches; but fair enough Jugs have some kind of gas. gas gasses into an accumulator too, it's just that it's the prettier (gas:in etc) when you don't, which doesn't offend your taste. The issue is that when the incredibly informal times end, changing jetted library functions will be a lot more difficult. And probably involve sacrificing a kelvin. I think it's just bad early Hoon style, however - I see accumulator arguments that are never actually replaced in the sample in a lot of old code. If tap didn't tap into an accumulator, we'd never miss it... I believe the official solution for strongly bound names is to come up with new names(see: marks), which doesn't sit well with me in general but is probably fine in this case. A kelvin would have to be sacrificed anyway if you were to add the functionality in the first place, and make it checkable by /?. In practice this is folded into breaches :/ Generally I feel like we are at(and started at!) far too few kelvins, preventing good development practices. Minor version numbers are useful. On Sunday, 21 February 2016, Raymond Pasco<EMAIL_ADDRESS>wrote: And probably involve sacrificing a kelvin. — Reply to this email directly or view it on GitHub https://github.com/urbit/arvo/issues/12#issuecomment-186856773. Of course the non-empty version is there because otherwise you need to weld. But I would be perfectly ok with ~(all by set), etc. We also need a ++li that's the equivalent of in and by. This can be done later with plenty of bikeshedding, though. On Sun, Feb 21, 2016 at 8:54 AM, Anton Dyudin<EMAIL_ADDRESS>wrote: I'd prefer to have a set timeline, so everybody gets a chance to quietly port their branches; but fair enough — Reply to this email directly or view it on GitHub https://github.com/urbit/arvo/issues/12#issuecomment-186856663. It's there, the question is whether it is ever used, especially in unjetted code. Wrt ++li, if the syntax was #16 made less heavy-handed maybe, and I agree that limo/homo should go to the constructors section, but cf. (~(gas by a) b), which is both there and used, because having to put:by one at a time would suck. No analogous case exists for tap that I can think of. Only vaguely apropos, but I seem to recall a (map something ~) in extant code, which should obviously be a (set something). It's in arch for performance reasons, being stripped from a (map something value). I am not sure how justified those performance reasons actually are. On Sunday, 21 February 2016, Raymond Pasco<EMAIL_ADDRESS>wrote: cf. 
(~(gas by a) b), which is both there and used, because having to put:by one at a time would suck. No analogous case exists for tap that I can think of. Only vaguely apropos, but I seem to recall a (map something ~) in extant code, which should obviously be a (set something). — Reply to this email directly or view it on GitHub https://github.com/urbit/arvo/issues/12#issuecomment-186913051. And while we're making up heap-based container wishlists, I need to add an arm for treap join, the other internal algorithm operation besides treap split (+-bif) - it's in the curious position of being jetted (as a helper function) but not implemented in Hoon... Treap join as separate from +- uni? I think it's not an arm because it's interleaved with most of the algorithms. P On Sunday, 21 February 2016, Raymond Pasco<EMAIL_ADDRESS>wrote: And while we're making up heap-based container wishlists, I need to add an arm for treap join, the other internal algorithm operation besides treap split (+-bif) - it's in the curious position of being jetted (as a helper function) but not implemented in Hoon... — Reply to this email directly or view it on GitHub https://github.com/urbit/arvo/issues/12#issuecomment-186914740. Join is not union, it's a lower-level operation just like split is. The algorithms are naive. It's not an arm because I couldn't think of a punchy three-letter name. I should be able to given time. :) Ah, the thing at the bottom of +-del which assumes nonintersection? +-mer perhaps, I can see why del being defined as ?~(a ,~ |=(b/_n.a (mer (bif b)))) would be cleaner. On Sunday, 21 February 2016, Raymond Pasco<EMAIL_ADDRESS>wrote: Join is not union, it's a lower-level operation just like split is. The algorithms are naive. It's not an arm because I couldn't think of a punchy three-letter name. I should be able to given time. :) — Reply to this email directly or view it on GitHub https://github.com/urbit/arvo/issues/12#issuecomment-186918361. It's a matter of following the standard literature. mer is an alright name, if a bit aquatic... Wait no you still need to descend nvm On Sunday, 21 February 2016, Anton Dyudin<EMAIL_ADDRESS>wrote: Ah, the thing at the bottom of +-del which assumes nonintersection? +-mer perhaps, I can see why del being defined as ?~(a ,~ |=(b/_n.a (mer (bif b)))) would be cleaner. On Sunday, 21 February 2016, Raymond Pasco<EMAIL_ADDRESS><javascript:_e(%7B%7D,'cvml','notifications@github.com');>> wrote: Join is not union, it's a lower-level operation just like split is. The algorithms are naive. It's not an arm because I couldn't think of a punchy three-letter name. I should be able to given time. :) — Reply to this email directly or view it on GitHub https://github.com/urbit/arvo/issues/12#issuecomment-186918361. wants to comment the first loop in del a "=+(axe=(dig b) ?~(axe a a(+{axe} $(a +{axe}.a))))" now On Sunday, 21 February 2016, Anton Dyudin<EMAIL_ADDRESS>wrote: Wait no you still need to descend nvm On Sunday, 21 February 2016, Anton Dyudin<EMAIL_ADDRESS><javascript:_e(%7B%7D,'cvml','antechno777@gmail.com');>> wrote: Ah, the thing at the bottom of +-del which assumes nonintersection? +-mer perhaps, I can see why del being defined as ?~(a ,~ |=(b/_n.a (mer (bif b)))) would be cleaner. On Sunday, 21 February 2016, Raymond Pasco<EMAIL_ADDRESS>wrote: Join is not union, it's a lower-level operation just like split is. The algorithms are naive. It's not an arm because I couldn't think of a punchy three-letter name. I should be able to given time. 
:) (if you're cleaning up h/h anyway) pls no this is not an api-opaque refactoring It is if you add it as a new name and then breach that part out, but yeah the "cleaning" involves jets being moved to %zuse in a way I was assuming would require a vere revision anyway. i said api not abi, i know one letter is just the other vertically flipped but there is a big difference! Sure, I don't see how you'd execute the latter in a way that wouldn't allow for the former though. Do you remember all the stuff you were just complaining about re: electroplating &c.? You cannot do this stuff en passant, or worse, encourage Curtis to. We should have a new name which is argumentless tap. tap must not change. Nothing must change "while you're there" refactoring hoon.hoon (well, not quite nothing, removing whitespace from the ends of lines would be cool, for instance :)) This seems like a nice thing to fold into cc-release. Is your preference still a new name for argument-less +-tap, or could it just be changed? Good question. I'd say the answer is "discuss that on the thread for the actual patch" :) This doesn't rise to the level of an issue. It's a potential feature of very dubious autistic value. I appreciate the desire to be done with these, but careful API design isn't of "very dubious value", especially when concerning core language containers. Reopening in order to close with pointer Closed, discussion continues on #352
2025-04-01T04:35:49.982290
2024-12-13T11:28:48
2738177528
{ "authors": [ "Will508", "urbste" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11793", "repo": "urbste/nanosam2", "url": "https://github.com/urbste/nanosam2/issues/4" }
gharchive/issue
No train.py in tool folder Good job! Hi, I saw in the README that there is a train.py in the tool directory, but it is actually missing. Could you please upload it? Hey please look here: nanosam2/tools/
2025-04-01T04:35:50.207049
2022-10-26T18:33:16
1424524691
{ "authors": [ "ColinC5" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11795", "repo": "utk-cs340-fall22/BattleBeasts", "url": "https://github.com/utk-cs340-fall22/BattleBeasts/issues/64" }
gharchive/issue
Bracket chooses modifier and attacks for player's opponent right before fight starts Must do this using the restrictions in the json files like in the beast customization scene. Communicated to the fight scene through Globals.cs. Parent issue of #70
2025-04-01T04:35:50.208774
2020-04-04T00:14:21
593701385
{ "authors": [ "osy86", "yanzhang0219" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11796", "repo": "utmapp/UTM", "url": "https://github.com/utmapp/UTM/issues/221" }
gharchive/issue
"Could not connect to AltServer" when installing it onto my iPad iPadOS version: iOS 13.4.5 developer beta iPad Pro 10.5-inch macOS version: 10.15.3 Thank you. This is not a UTM bug. Please contact AltServer's developers.
2025-04-01T04:35:50.215229
2021-01-23T11:53:20
792539157
{ "authors": [ "Vishram1123", "barantamur", "osy" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11797", "repo": "utmapp/UTM", "url": "https://github.com/utmapp/UTM/issues/2281" }
gharchive/issue
"+" and "Edit" buttons not working Describe the issue After opening the app, I touch the "+" and "Edit" buttons(Only buttons in the main menu) but nothing happens. Configuration (required) UTM Version: 2.0.18 OS Version: iOS 14.2 Device Model: iPhone 7 (9,1) Checkra1n jailbreaked. Procursus bootstrap. Installed via cydia repo Crash log It didn't crash. Debug log Cannot use the menu Upload VM I cannot create VMS. I cannot reproduce this. What language is your phone set to? Do you have any tweaks/extensions that could affect how things are displayed? I cannot reproduce this. What language is your phone set to? Do you have any tweaks/extensions that could affect how things are displayed? I cannot reproduce this. What language is your phone set to? Do you have any tweaks/extensions that could affect how things are displayed? My language is: English My tweaks(I only listed which affect how things are displayed): Little11(not enabled), Snowboard, XenHTML, Rofi(Not enabled), A-Font(not enabled),MobileGoose(not enabled) I cannot reproduce this. What language is your phone set to? Do you have any tweaks/extensions that could affect how things are displayed? My language is: English My tweaks(I only listed which affect how things are displayed): Little11(not enabled), Snowboard, XenHTML, Rofi(Not enabled), A-Font(not enabled),MobileGoose(not enabled) I cannot reproduce this. What language is your phone set to? Do you have any tweaks/extensions that could affect how things are displayed? My language is: English My tweaks(I only listed which affect how things are displayed): Little11(not enabled), Snowboard, XenHTML, Rofi(Not enabled), A-Font(not enabled),MobileGoose(not enabled) Edit: I use FiveDock13 too. You might want to upgrade to little12 as it causes some issues in iOS 14. For example, the home screen is broken when you use it. That may be part of the problem. I am closing this as I cannot reproduce but if you run into this error without running any tweaks (non-jailbroken mode), please reopen.
2025-04-01T04:35:50.229893
2021-01-24T23:35:38
792929233
{ "authors": [ "jdanyow", "mwt" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11798", "repo": "utterance/utterances", "url": "https://github.com/utterance/utterances/pull/469" }
gharchive/pull-request
Update SITES.md Implemented in two of my projects. Thank you for the great tool! Thanks for the PR! We've simplified the process of adding your site/repo to the list. Instead of updating SITES.md you would add the utterances topic to your repo.
2025-04-01T04:35:50.237356
2017-06-14T07:17:28
235784239
{ "authors": [ "Meijuh", "alaarman", "yanntm" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11799", "repo": "utwente-fmt/ltsmin", "url": "https://github.com/utwente-fmt/ltsmin/issues/118" }
gharchive/issue
Always Return 0, but state if a counter-example was found or not for LTL Currently, the return value 1 is supposed to be used to determine if a violation was found or not. This behavior leads to a lot of problems for me, because the tool DID NOT FAIL, but returns non zero, so this is not a standard behavior for a Unix command. Please Add a message that states "violation not found", to complement the existing "violation found" Drop the non zero return value when no runtime error happens. I get this sort of stuff : Jun 13, 2017 11:43:24 PM fr.lip6.move.gal.itstools.Runner runTool INFO: Standard error output from running tool CommandLine [args=[/home/travis/build/yanntm/ITS-Tools-pnmcc/lts_install_dir/bin/pins2lts-seq, ./gal.so, --when, --ltl, ()U(X())], workingDir=/home/travis/build/yanntm/ITS-Tools-pnmcc/INPUTS/SafeBus-COL-03] /home/travis/build/yanntm/ITS-Tools-pnmcc/lts_install_dir/bin/pins2lts-seq: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.17' not found (required by /home/travis/build/yanntm/ITS-Tools-pnmcc/lts_install_dir/bin/pins2lts-seq) Exit code :1 FORMULA SafeBus-COL-03-LTLCardinality-1 TRUE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT So yes, it's my fault the GLIBC version was wrong, but as you can see my verdict thing is in sore trouble to deduce what actually happened, it did see return value 1 but did not crash on it, because it's "supposed" to be "normal" for ltsmin to give non zero values back. You cannot as developer of a tool control all the error codes I believe this is a good example of it. The Unix standard is to return 0 unless "something bad" happened. Finding counter-examples is not "something bad". This is by design. A wrapper script can solve the issue. In principle, we assume the return value 1 is reserved. I am not against the "violation not found" output. Ok, I'll take the 'violation not found', that solves my tool chain issue. I guess this non zero return value is used in other toolchains. It might be by design, but it does not respect standards, which are important too imho. Here I'm getting return code 1 from running the tool, not by design (by Unix standard !), but I don't see how you could catch this from the code. A wrapper looking at the output could return 1 pretty easily (iff. return code 0 and analysis of output ...) so the argument holds both ways. But a contrario, I can't wrap correctly if the tool returns 1 not by design, such as this case, and only testing absence of "violation found" in output is weak in case the tool fails. Anyway, let's go for "violation not found". Also other tools in the suite don't behave homogeneously, e.g. -d does not report 1 if it found deadlocks. At least with -seq that I was using. So I don't know why the LTL case is different. I fixed for the case of the sequential backend. Now it still has to be reported for LTL properties. @alaarman can you fix this for LTL properties? LTSmin now outputs that the formula holds.
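As an aside, a minimal sketch of the wrapper-script idea mentioned above (hypothetical, not part of LTSmin): combine the exit code with a check for the "violation found" message so that the by-design exit code of 1 is not confused with a genuine tool failure. The exact message text and option handling are assumptions.

import subprocess
import sys

def run_pins2lts_seq(args):
    # Hypothetical wrapper: run the sequential backend and classify the outcome.
    proc = subprocess.run(["pins2lts-seq", *args], capture_output=True, text=True)
    output = proc.stdout + proc.stderr
    if "violation found" in output.lower():
        return "violation"      # exit code 1 is expected here by design
    if proc.returncode == 0:
        return "no violation"
    # Any other non-zero exit is treated as a real error (e.g. a bad GLIBC).
    raise RuntimeError(f"pins2lts-seq failed with exit code {proc.returncode}:\n{output}")

if __name__ == "__main__":
    print(run_pins2lts_seq(sys.argv[1:]))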
2025-04-01T04:35:50.242457
2017-06-25T04:29:43
238359259
{ "authors": [ "ashishkumar4029", "johnlevi", "karim-elngr", "midoBi" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11800", "repo": "uvagfx/hipi", "url": "https://github.com/uvagfx/hipi/issues/41" }
gharchive/issue
java.lang.ClassNotFoundException: com.drew.imaging.ImageProcessingException exception when running ./hibDump .... +1 @midoBi please provide more details about your problem. @midoBi Are you sure that your machines have HIPI properly configured in the class path? I had to build a FAT JAR to include the HIPI dependencies for my program such as TwelveMonkeys image io. https://www.karimelnaggar.com/2017/11/25/compiling-hipi-programs-with-dependencies-for-hadoop-clusters-using-gradle/
2025-04-01T04:35:50.261541
2022-04-11T06:10:23
1199442652
{ "authors": [ "Shehryar21" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11801", "repo": "uwblueprint/community-fridge-kw", "url": "https://github.com/uwblueprint/community-fridge-kw/pull/203" }
gharchive/pull-request
feat: admin export food rescues and checkins Brief description. What is this change? Admin Export All Fridge Check-ins/Food Rescues Implementation description. How did you make this change? Added generateCSV utils file back Steps to test Create some checkins and schedules Go to http://localhost:3000/admin/check-ins and click on export. It should download a csv file. Verify that the data outputted in the csv file is accurate. Go to http://localhost:3000/admin/view-donations and click on export. It should download a csv file. Verify that the data outputted in the csv file is accurate. Checklist [x] My PR name is descriptive and in imperative tense [x] My commit messages follow conventional commits and are descriptive. My commits are atomic and trivial commits are squashed or fixup'd into non-trivial commits [x] I have run the appropriate linter(s) [x] I have requested a review from the PL, as well as other devs who have background knowledge on this PR or who will be building on top of this PR [ ] The appropriate tests if necessary have been written Functionality works for me. From the admin's perspective, these dates are pretty hard to read - can we format this properly when it's being mapped in getCSVData? also two other questions Should we be fetching all checkins/schedules or just upcoming? Is it worth it to semi optimize the fetching function to store volunteerId and donorId's in a hashmap or something so that we're not fetching it from the backend for every single entry? We should be fetching all (confirmed with Amy) tbh dont think its necessary. Given that fetching only happens on export, it wont happen that often anyways.
2025-04-01T04:35:50.262626
2017-01-05T19:28:43
199038411
{ "authors": [ "achen2401", "ivan-c" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11802", "repo": "uwcirg/true_nth_usa_portal", "url": "https://github.com/uwcirg/true_nth_usa_portal/pull/379" }
gharchive/pull-request
unescape ampersand character in assessment links fix escaped ampersand link in assessment links in profile Looks good; merging Thanks!
2025-04-01T04:35:50.264199
2019-04-08T19:28:11
430619327
{ "authors": [ "achen2401" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11803", "repo": "uwcirg/truenth-portal", "url": "https://github.com/uwcirg/truenth-portal/pull/3117" }
gharchive/pull-request
TN-408 fix organization JSON for UCSF urologic clinic per convo related to this story: https://jira.movember.com/browse/TN-408 Fix organization JSON for UCSF Urologic Surgical Oncology Clinic so it is the child clinic of UCSF. @mcjustin @ivan-c Fixing the order of JSON entries seemed to fix the failed tests, at least in my instance, thank you for the advice.
2025-04-01T04:35:50.265031
2015-07-29T06:29:08
97869948
{ "authors": [ "domoritz", "kanitw" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11804", "repo": "uwdata/voyager", "url": "https://github.com/uwdata/voyager/issues/176" }
gharchive/issue
Need to move cursor to trigger rendering So when you select a field and don't move the cursor, nothing happens. The new gallery of visualizations loads as soon as you move the cursor. Sounds like we have seen this before ...
2025-04-01T04:35:50.266113
2024-01-26T20:38:35
2102868957
{ "authors": [ "uweseimet" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11805", "repo": "uweseimet/scsi2pi", "url": "https://github.com/uweseimet/scsi2pi/issues/48" }
gharchive/issue
Split bus into a target and an initiator bus In target mode the bus code for the initiator is not needed, and vice versa. It may make sense to re-organize the GpioBus and Bus classes and split them into a target and an initiator part. With this approach the explicit target/initiator mode checks in GpioBus will become obsolete and the difference between initiator and target becomes more explicit. Doing this would not be worth the effort.
2025-04-01T04:35:50.273297
2023-08-21T15:55:04
1859679595
{ "authors": [ "briesenberg07", "cspayne", "gerontakos" ], "license": "CC0-1.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11806", "repo": "uwlib-cams/uwlswd", "url": "https://github.com/uwlib-cams/uwlswd/issues/41" }
gharchive/issue
html improvements [ ] ensure rdfa meets specifications and standards [ ] create css file to format table and any other aesthetic properties of web page validators: https://validator.w3.org/nu/ (works with .xhtml) http://linter.structured-data.org/ @briesenberg07 @gerontakos I just pushed uwlswd_vocabs/linked_data_platforms.html with these changes set up so they are visible on the live site. linked_data_platforms.rdf now reflects the rdflib style as well. Let me know if you have any feedback, comments, or concerns! 🎉 🎉 Wowow looking great! A nice update to the presentation IMHO! I'd like to talk about how to handle b-nodes in the HTML+RDFa presentation. I've thought about a couple of options: Present bnode near related data: Potentially problematic; what if the same bnode is referenced in multiple places--where to put it in the table? Present bnode objects in table as links to bnode subjects in the table: Seems do-able; if same bnode is referenced multiple times in table, each reference could link to the bnode description's anchor in table Most certainly improved, thank you. However, it's still an ugly data table. Can we dress it up, put on some jewelry maybe, even tattoos? Something. What can we do? A data table is not uncommon, right? Is there another way to represent it? Maybe colors would make it more appealing? Make it seem like "serious data" or something? Maybe we could also offer something less linear, some kind of visualizations? That wouldn't be in place of this data but in-addition-to. I'm not particular about bnode representation; however, the bnode in this dataset is flat-out wrong. There's no partition. "Let us be forgiving of the mistakes we make." Also, what in heaven is a "platform"? the RDF/XML is much easier to read and, I expect, easier to process. see test_format.html Nice! The links above the table -- are they now redundant and unnecessary? Nice! The links above the table -- are they now redundant and unnecessary? They aren't redundant quite yet! I may try to move them so they appear under the tab displaying that format, but if anyone wants to download and use those files, the links will still be useful.
2025-04-01T04:35:50.301463
2022-05-22T18:37:13
1244334531
{ "authors": [ "codecov-commenter", "v3gtb" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11807", "repo": "v3gtb/fooddata-vegattributes", "url": "https://github.com/v3gtb/fooddata-vegattributes/pull/61" }
gharchive/pull-request
Add coverage report via codecov to CI Fixes #60 Codecov Report :exclamation: No coverage uploaded for pull request base (main@d9fcd65). Click here to learn what that means. The diff coverage is n/a.
@@ Coverage Diff @@
##             main      #61   +/-   ##
=======================================
  Coverage        ?   60.50%
=======================================
  Files           ?       22
  Lines           ?      519
  Branches        ?        0
=======================================
  Hits            ?      314
  Misses          ?      205
  Partials        ?        0
=======================================
Continue to review full report at Codecov. Legend - Click here to learn more Δ = absolute <relative> (impact), ø = not affected, ? = missing data Powered by Codecov. Last update d9fcd65...70f6e5f. Read the comment docs.
2025-04-01T04:35:50.364587
2022-08-15T15:00:26
1339111989
{ "authors": [ "cromoteca" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11808", "repo": "vaadin/hilla", "url": "https://github.com/vaadin/hilla/pull/504" }
gharchive/pull-request
fix(engine): add support for custom connect client Adds a check to verify if a file named frontend/connect-client.ts exists, just like in the single-module generator. Fixes #485 Looks like the upgrade to Lit 2.3.0 (coming from Flow) broke a test which is not related to this PR.
2025-04-01T04:35:50.367964
2024-05-03T09:44:30
2277312636
{ "authors": [ "MarcinVaadin", "landsman" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11809", "repo": "vaadin/intellij-plugin", "url": "https://github.com/vaadin/intellij-plugin/issues/39" }
gharchive/issue
Hilla support I have a problem with what is probably missing support for Hilla. When I click on the "sayHello" method, I would actually love to be redirected to "HelloWorldService.java", not a compiled .ts file. I hope this standard behavior can be overridden by a custom plugin. Also, public classes and methods look unused in the IDE. Are these features part of the already existing plugins? Or does a new plugin have to be introduced, especially for Hilla? Without this, it's a bad developer experience for the future. I totally understand that this is not an easy thing to do, but it's one of the crucial parts of modern development that we've become used to, and we've used it for years (in the same language/framework). So it would be nice to make this work as well. originally posted on forum: https://vaadin.com/forum/t/intellij-idea-support-for-hilla/166093 Thanks for creating this issue. In the current version the plugin downloads and extracts starter projects. It does not have full support for navigating between Hilla generated endpoints and their source implementation.
2025-04-01T04:35:50.421438
2020-05-27T12:09:29
625639648
{ "authors": [ "CLAassistant", "DiegoCardoso" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11810", "repo": "vaadin/vaadin-checkbox", "url": "https://github.com/vaadin/vaadin-checkbox/pull/174" }
gharchive/pull-request
feat: add helper text API Add property and named slot Add styles for supported themes Add demo for helper text API Add unit and visual tests for helper text Part of #170 Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.You have signed the CLA already but the status is still pending? Let us recheck it.
2025-04-01T04:35:50.498279
2020-01-17T07:49:07
551258947
{ "authors": [ "gilescope", "robinchrist", "vadimcn" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11811", "repo": "vadimcn/vscode-lldb", "url": "https://github.com/vadimcn/vscode-lldb/issues/252" }
gharchive/issue
Cannot Attach: Could not send event to DebugSession: "Full(..)" OS: Kubuntu 19.10 (Kernel 5.3.0-26) VSCode version: 1.41.1 Extension version: 1.4.5 Python version: 3.7.5 LLDB version: lldb version 9.0.0 If I try to attach to a process, I get the following error: configuration: { type: 'lldb', request: 'attach', name: 'Debug', pid: '${command:pickMyProcess}', relativePathBase: '/home/robin/dev/<redacted>/ReproAlex16012020/node-corelib' } Listening on port 41847 [2020-01-17T07:42:57Z ERROR codelldb::debug_session] Could not send event to DebugSession: "Full(..)" [2020-01-17T07:42:57Z ERROR codelldb::debug_session] Could not send event to DebugSession: "Full(..)" [2020-01-17T07:42:57Z ERROR codelldb::debug_session] Could not send event to DebugSession: "Full(..)" [2020-01-17T07:42:57Z ERROR codelldb::debug_session] Could not send event to DebugSession: "Full(..)" [2020-01-17T07:42:57Z ERROR codelldb::debug_session] Could not send event to DebugSession: "Full(..)" [2020-01-17T07:42:57Z ERROR codelldb::debug_session] Could not send event to DebugSession: "Full(..)" [2020-01-17T07:42:57Z ERROR codelldb::debug_session] Could not send event to DebugSession: "Full(..)" [2020-01-17T07:42:57Z ERROR codelldb::debug_session] Could not send event to DebugSession: "Full(..)" [2020-01-17T07:42:57Z ERROR codelldb::debug_session] Could not send event to DebugSession: "Full(..)" Debug log configuration: { type: 'lldb', request: 'attach', name: 'Debug', pid: '${command:pickMyProcess}', relativePathBase: '/home/robin/dev//node-corelib' } liblldb: /home/robin/.vscode/extensions/vadimcn.vscode-lldb-1.4.5/lldb/lib/liblldb.so libpython: libpython3.7m.so.1.0 environment: {} params: {} Listening on port 34443 [2020-01-17T07:46:15Z DEBUG codelldb] New debug session INFO(Python) 08:46:15 rust: Initializing, module name=rust DEBUG(Python) 08:46:15 rust: attaching summary get_tuple_summary to "^\(.*\)$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching synthetic StrSliceSynthProvider to "&str", is_regex=False DEBUG(Python) 08:46:15 rust: attaching summary _get_synth_summary_StrSliceSynthProvider to "&str", is_regex=False DEBUG(Python) 08:46:15 rust: attaching synthetic StrSliceSynthProvider to "str*", is_regex=False DEBUG(Python) 08:46:15 rust: attaching summary _get_synth_summary_StrSliceSynthProvider to "str*", is_regex=False DEBUG(Python) 08:46:15 rust: attaching synthetic StdStringSynthProvider to "collections::string::String", is_regex=False DEBUG(Python) 08:46:15 rust: attaching summary _get_synth_summary_StdStringSynthProvider to "collections::string::String", is_regex=False DEBUG(Python) 08:46:15 rust: attaching synthetic StdStringSynthProvider to "alloc::string::String", is_regex=False DEBUG(Python) 08:46:15 rust: attaching summary _get_synth_summary_StdStringSynthProvider to "alloc::string::String", is_regex=False DEBUG(Python) 08:46:15 rust: attaching synthetic StdVectorSynthProvider to "^collections::vec::Vec<.+>$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching summary _get_synth_summary_StdVectorSynthProvider to "^collections::vec::Vec<.+>$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching synthetic StdVectorSynthProvider to "^alloc::vec::Vec<.+>$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching summary _get_synth_summary_StdVectorSynthProvider to "^alloc::vec::Vec<.+>$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching synthetic SliceSynthProvider to "^&(mut\s*)?\[.*\]$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching summary _get_synth_summary_SliceSynthProvider to 
"^&(mut\s*)?\[.*\]$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching synthetic SliceSynthProvider to "^slice<.+>.*$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching summary _get_synth_summary_SliceSynthProvider to "^slice<.+>.*$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching synthetic StdCStringSynthProvider to "std::ffi::c_str::CString", is_regex=False DEBUG(Python) 08:46:15 rust: attaching summary _get_synth_summary_StdCStringSynthProvider to "std::ffi::c_str::CString", is_regex=False DEBUG(Python) 08:46:15 rust: attaching synthetic StdCStrSynthProvider to "std::ffi::c_str::CStr", is_regex=False DEBUG(Python) 08:46:15 rust: attaching summary _get_synth_summary_StdCStrSynthProvider to "std::ffi::c_str::CStr", is_regex=False DEBUG(Python) 08:46:15 rust: attaching synthetic StdOsStringSynthProvider to "std::ffi::os_str::OsString", is_regex=False DEBUG(Python) 08:46:15 rust: attaching summary _get_synth_summary_StdOsStringSynthProvider to "std::ffi::os_str::OsString", is_regex=False DEBUG(Python) 08:46:15 rust: attaching synthetic StdOsStrSynthProvider to "std::ffi::os_str::OsStr", is_regex=False DEBUG(Python) 08:46:15 rust: attaching summary _get_synth_summary_StdOsStrSynthProvider to "std::ffi::os_str::OsStr", is_regex=False DEBUG(Python) 08:46:15 rust: attaching synthetic StdPathBufSynthProvider to "std::path::PathBuf", is_regex=False DEBUG(Python) 08:46:15 rust: attaching summary _get_synth_summary_StdPathBufSynthProvider to "std::path::PathBuf", is_regex=False DEBUG(Python) 08:46:15 rust: attaching synthetic StdPathSynthProvider to "std::path::Path", is_regex=False DEBUG(Python) 08:46:15 rust: attaching summary _get_synth_summary_StdPathSynthProvider to "std::path::Path", is_regex=False DEBUG(Python) 08:46:15 rust: attaching synthetic StdRcSynthProvider to "^alloc::rc::Rc<.+>$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching summary _get_synth_summary_StdRcSynthProvider to "^alloc::rc::Rc<.+>$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching synthetic StdRcSynthProvider to "^alloc::rc::Weak<.+>$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching summary _get_synth_summary_StdRcSynthProvider to "^alloc::rc::Weak<.+>$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching synthetic StdArcSynthProvider to "^alloc::(sync|arc)::Arc<.+>$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching summary _get_synth_summary_StdArcSynthProvider to "^alloc::(sync|arc)::Arc<.+>$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching synthetic StdArcSynthProvider to "^alloc::(sync|arc)::Weak<.+>$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching summary _get_synth_summary_StdArcSynthProvider to "^alloc::(sync|arc)::Weak<.+>$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching synthetic StdMutexSynthProvider to "^std::sync::mutex::Mutex<.+>$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching summary _get_synth_summary_StdMutexSynthProvider to "^std::sync::mutex::Mutex<.+>$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching synthetic StdCellSynthProvider to "^core::cell::Cell<.+>$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching summary _get_synth_summary_StdCellSynthProvider to "^core::cell::Cell<.+>$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching synthetic StdRefCellSynthProvider to "^core::cell::RefCell<.+>$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching summary _get_synth_summary_StdRefCellSynthProvider to "^core::cell::RefCell<.+>$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching synthetic StdRefCellBorrowSynthProvider 
to "^core::cell::Ref<.+>$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching summary _get_synth_summary_StdRefCellBorrowSynthProvider to "^core::cell::Ref<.+>$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching synthetic StdRefCellBorrowSynthProvider to "^core::cell::RefMut<.+>$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching summary _get_synth_summary_StdRefCellBorrowSynthProvider to "^core::cell::RefMut<.+>$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching synthetic StdHashMapSynthProvider to "^std::collections::hash::map::HashMap<.+>$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching summary _get_synth_summary_StdHashMapSynthProvider to "^std::collections::hash::map::HashMap<.+>$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching synthetic StdHashSetSynthProvider to "^std::collections::hash::set::HashSet<.+>$", is_regex=True DEBUG(Python) 08:46:15 rust: attaching summary _get_synth_summary_StdHashSetSynthProvider to "^std::collections::hash::set::HashSet<.+>$", is_regex=True [2020-01-17T07:46:15Z DEBUG codelldb::wire_protocol] --> {"command":"initialize","arguments":{"clientID":"vscode","clientName":"Visual Studio Code","adapterID":"lldb","pathFormat":"path","linesStartAt1":true,"columnsStartAt1":true,"supportsVariableType":true,"supportsVariablePaging":true,"supportsRunInTerminalRequest":true,"locale":"en-us"},"type":"request","seq":1} [2020-01-17T07:46:15Z DEBUG codelldb::wire_protocol] <-- {"type":"response","request_seq":1,"success":true,"command":"initialize","body":{"exceptionBreakpointFilters":[{"default":true,"filter":"cpp_throw","label":"C++: on throw"},{"default":false,"filter":"cpp_catch","label":"C++: on catch"}],"supportTerminateDebuggee":true,"supportsCompletionsRequest":true,"supportsConditionalBreakpoints":true,"supportsConfigurationDoneRequest":true,"supportsDataBreakpoints":true,"supportsDelayedStackTraceLoading":true,"supportsEvaluateForHovers":true,"supportsFunctionBreakpoints":true,"supportsGotoTargetsRequest":true,"supportsHitConditionalBreakpoints":true,"supportsLogPoints":true,"supportsRestartFrame":true,"supportsSetVariable":true}} [2020-01-17T07:46:15Z DEBUG codelldb::wire_protocol] --> {"command":"attach","arguments":{"type":"lldb","request":"attach","name":"Debug","pid":"8265","relativePathBase":"/home/robin/dev//node-corelib","_adapterSettings":{"displayFormat":"auto","showDisassembly":"auto","dereferencePointers":true,"suppressMissingSourceFiles":true,"evaluationTimeout":5,"consoleMode":"commands","sourceLanguages":null},"__sessionId":"cd67a189-99b7-4f14-a811-b82c492b03a0"},"type":"request","seq":2} [2020-01-17T07:46:15Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":1,"event":"initialized"} [2020-01-17T07:46:15Z DEBUG codelldb::wire_protocol] --> {"command":"setBreakpoints","arguments":{"source":{"name":"SVArray.cpp","path":"/home/robin/dev//node-corelib/-corelib/src/array/SVA/SVArray.cpp"},"lines":[],"breakpoints":[],"sourceModified":false},"type":"request","seq":3} [2020-01-17T07:46:15Z DEBUG codelldb::wire_protocol] <-- {"type":"response","request_seq":3,"success":true,"command":"setBreakpoints","body":{"breakpoints":[]}} [2020-01-17T07:46:15Z DEBUG codelldb::wire_protocol] --> {"command":"setBreakpoints","arguments":{"source":{"name":"SVArray.hpp","path":"/home/robin/dev//node-corelib/-corelib/include/array/SVA/SVArray.hpp"},"lines":[],"breakpoints":[],"sourceModified":false},"type":"request","seq":4} [2020-01-17T07:46:15Z DEBUG codelldb::wire_protocol] <-- 
{"type":"response","request_seq":4,"success":true,"command":"setBreakpoints","body":{"breakpoints":[]}} [2020-01-17T07:46:15Z DEBUG codelldb::wire_protocol] --> {"command":"setBreakpoints","arguments":{"source":{"name":"TransferFunctionLibrary.hpp","path":"/home/robin/dev//node-corelib/-corelib/include/library/TransferFunctionLibrary.hpp"},"lines":[62],"breakpoints":[{"line":62}],"sourceModified":false},"type":"request","seq":5} [2020-01-17T07:46:15Z DEBUG codelldb::wire_protocol] <-- {"type":"response","request_seq":5,"success":true,"command":"setBreakpoints","body":{"breakpoints":[{"id":1,"message":"Locations: 0","verified":false}]}} [2020-01-17T07:46:15Z DEBUG codelldb::wire_protocol] --> {"command":"setFunctionBreakpoints","arguments":{"breakpoints":[]},"type":"request","seq":6} [2020-01-17T07:46:15Z DEBUG codelldb::wire_protocol] <-- {"type":"response","request_seq":6,"success":true,"command":"setFunctionBreakpoints","body":{"breakpoints":[]}} [2020-01-17T07:46:15Z DEBUG codelldb::wire_protocol] --> {"command":"setDataBreakpoints","arguments":{"breakpoints":[]},"type":"request","seq":7} [2020-01-17T07:46:15Z DEBUG codelldb::wire_protocol] <-- {"type":"response","request_seq":7,"success":true,"command":"setDataBreakpoints","body":{"breakpoints":[]}} [2020-01-17T07:46:15Z DEBUG codelldb::wire_protocol] --> {"command":"setExceptionBreakpoints","arguments":{"filters":["cpp_throw"]},"type":"request","seq":8} [2020-01-17T07:46:15Z DEBUG codelldb::wire_protocol] <-- {"type":"response","request_seq":8,"success":true,"command":"setExceptionBreakpoints"} [2020-01-17T07:46:15Z DEBUG codelldb::wire_protocol] --> {"command":"configurationDone","type":"request","seq":9} [2020-01-17T07:46:16Z ERROR codelldb::debug_session] Could not send event to DebugSession: "Full(..)" [2020-01-17T07:46:16Z ERROR codelldb::debug_session] Could not send event to DebugSession: "Full(..)" [2020-01-17T07:46:16Z ERROR codelldb::debug_session] Could not send event to DebugSession: "Full(..)" [2020-01-17T07:46:16Z ERROR codelldb::debug_session] Could not send event to DebugSession: "Full(..)" [2020-01-17T07:46:16Z ERROR codelldb::debug_session] Could not send event to DebugSession: "Full(..)" [2020-01-17T07:46:16Z ERROR codelldb::debug_session] Could not send event to DebugSession: "Full(..)" [2020-01-17T07:46:16Z ERROR codelldb::debug_session] Could not send event to DebugSession: "Full(..)" [2020-01-17T07:46:16Z ERROR codelldb::debug_session] Could not send event to DebugSession: "Full(..)" [2020-01-17T07:46:16Z ERROR codelldb::debug_session] Could not send event to DebugSession: "Full(..)" [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258043360 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {electron} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"response","request_seq":2,"success":true,"command":"attach"} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"response","request_seq":9,"success":true,"command":"configurationDone"} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f72580241e0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {[vdso](0x00007ffce3122000)} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- 
{"type":"event","seq":2,"event":"module","body":{"module":{"addressRange":"5638CD3FD000","id":"5638CD3FD000","name":"electron","path":"/home/robin/dev//ReproAlex16012020/-ui/node_modules/electron/dist/electron","symbolFilePath":"/home/robin/dev//ReproAlex16012020/-ui/node_modules/electron/dist/electron","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":3,"event":"module","body":{"module":{"addressRange":"7FFCE3122000","id":"7FFCE3122000","name":"[vdso]","path":"[vdso]","symbolStatus":"Symbols not found"},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f72580e9ab0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libffmpeg.so} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258462e40 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libdl.so.2} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":4,"event":"module","body":{"module":{"addressRange":"7F79CDDB0000","id":"7F79CDDB0000","name":"libffmpeg.so","path":"/home/robin/dev//ReproAlex16012020/-ui/node_modules/electron/dist/libffmpeg.so","symbolFilePath":"/home/robin/dev//ReproAlex16012020/-ui/node_modules/electron/dist/libffmpeg.so","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":5,"event":"module","body":{"module":{"addressRange":"7F79CDD88000","id":"7F79CDD88000","name":"libdl.so.2","path":"/lib/x86_64-linux-gnu/libdl.so.2","symbolFilePath":"/lib/x86_64-linux-gnu/libdl.so.2","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f72587c8d10 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libpthread.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f72587d0970 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {librt.so.1} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":6,"event":"module","body":{"module":{"addressRange":"7F79CDD65000","id":"7F79CDD65000","name":"libpthread.so.0","path":"/lib/x86_64-linux-gnu/libpthread.so.0","symbolFilePath":"/usr/lib/debug/.build-id/39/560457911d968d9e06088da015970b0018153f.debug","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f72587da0c0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libgobject-2.0.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":7,"event":"module","body":{"module":{"addressRange":"7F79CDD5A000","id":"7F79CDD5A000","name":"librt.so.1","path":"/lib/x86_64-linux-gnu/librt.so.1","symbolFilePath":"/lib/x86_64-linux-gnu/librt.so.1","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258802590 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libglib-2.0.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- 
{"type":"event","seq":8,"event":"module","body":{"module":{"addressRange":"7F79CDCFD000","id":"7F79CDCFD000","name":"libgobject-2.0.so.0","path":"/lib/x86_64-linux-gnu/libgobject-2.0.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libgobject-2.0.so.0","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f725880c6a0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libgio-2.0.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":9,"event":"module","body":{"module":{"addressRange":"7F79CDBD5000","id":"7F79CDBD5000","name":"libglib-2.0.so.0","path":"/lib/x86_64-linux-gnu/libglib-2.0.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libglib-2.0.so.0","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f72588b7d10 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libX11.so.6} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":10,"event":"module","body":{"module":{"addressRange":"7F79CD9F8000","id":"7F79CD9F8000","name":"libgio-2.0.so.0","path":"/lib/x86_64-linux-gnu/libgio-2.0.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libgio-2.0.so.0","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258334580 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libX11-xcb.so.1} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":11,"event":"module","body":{"module":{"addressRange":"7F79CD8BA000","id":"7F79CD8BA000","name":"libX11.so.6","path":"/lib/x86_64-linux-gnu/libX11.so.6","symbolFilePath":"/lib/x86_64-linux-gnu/libX11.so.6","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258353670 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libxcb.so.1} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":12,"event":"module","body":{"module":{"addressRange":"7F79CD8B5000","id":"7F79CD8B5000","name":"libX11-xcb.so.1","path":"/lib/x86_64-linux-gnu/libX11-xcb.so.1","symbolFilePath":"/lib/x86_64-linux-gnu/libX11-xcb.so.1","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f72582a67b0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libXcomposite.so.1} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":13,"event":"module","body":{"module":{"addressRange":"7F79CD88C000","id":"7F79CD88C000","name":"libxcb.so.1","path":"/lib/x86_64-linux-gnu/libxcb.so.1","symbolFilePath":"/lib/x86_64-linux-gnu/libxcb.so.1","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f72580799d0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libXcursor.so.1} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":14,"event":"module","body":{"module":{"addressRange":"7F79CD887000","id":"7F79CD887000","name":"libXcomposite.so.1","path":"/lib/x86_64-linux-gnu/libXcomposite.so.1","symbolFilePath":"/lib/x86_64-linux-gnu/libXcomposite.so.1","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG 
codelldb::debug_session] Debug event: 0x7f72582c35b0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libXdamage.so.1} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":15,"event":"module","body":{"module":{"addressRange":"7F79CD87A000","id":"7F79CD87A000","name":"libXcursor.so.1","path":"/lib/x86_64-linux-gnu/libXcursor.so.1","symbolFilePath":"/lib/x86_64-linux-gnu/libXcursor.so.1","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":16,"event":"module","body":{"module":{"addressRange":"7F79CD873000","id":"7F79CD873000","name":"libXdamage.so.1","path":"/lib/x86_64-linux-gnu/libXdamage.so.1","symbolFilePath":"/lib/x86_64-linux-gnu/libXdamage.so.1","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258049350 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libXext.so.6} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f72583f8e00 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libXfixes.so.3} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":17,"event":"module","body":{"module":{"addressRange":"7F79CD85E000","id":"7F79CD85E000","name":"libXext.so.6","path":"/lib/x86_64-linux-gnu/libXext.so.6","symbolFilePath":"/lib/x86_64-linux-gnu/libXext.so.6","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":18,"event":"module","body":{"module":{"addressRange":"7F79CD658000","id":"7F79CD658000","name":"libXfixes.so.3","path":"/lib/x86_64-linux-gnu/libXfixes.so.3","symbolFilePath":"/lib/x86_64-linux-gnu/libXfixes.so.3","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f72582d4fd0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libXi.so.6} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f72583041e0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libXrender.so.1} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":19,"event":"module","body":{"module":{"addressRange":"7F79CD646000","id":"7F79CD646000","name":"libXi.so.6","path":"/lib/x86_64-linux-gnu/libXi.so.6","symbolFilePath":"/lib/x86_64-linux-gnu/libXi.so.6","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f725836ec40 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libXtst.so.6} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":20,"event":"module","body":{"module":{"addressRange":"7F79CD43C000","id":"7F79CD43C000","name":"libXrender.so.1","path":"/lib/x86_64-linux-gnu/libXrender.so.1","symbolFilePath":"/lib/x86_64-linux-gnu/libXrender.so.1","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f72582c9c00 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libnss3.so} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- 
{"type":"event","seq":21,"event":"module","body":{"module":{"addressRange":"7F79CD236000","id":"7F79CD236000","name":"libXtst.so.6","path":"/lib/x86_64-linux-gnu/libXtst.so.6","symbolFilePath":"/lib/x86_64-linux-gnu/libXtst.so.6","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258c3ce50 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libnssutil3.so} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":22,"event":"module","body":{"module":{"addressRange":"7F79CD0E6000","id":"7F79CD0E6000","name":"libnss3.so","path":"/lib/x86_64-linux-gnu/libnss3.so","symbolFilePath":"/lib/x86_64-linux-gnu/libnss3.so","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f72581c9f30 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libsmime3.so} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":23,"event":"module","body":{"module":{"addressRange":"7F79CD0B3000","id":"7F79CD0B3000","name":"libnssutil3.so","path":"/lib/x86_64-linux-gnu/libnssutil3.so","symbolFilePath":"/lib/x86_64-linux-gnu/libnssutil3.so","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f725835f4e0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libnspr4.so} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":24,"event":"module","body":{"module":{"addressRange":"7F79CD083000","id":"7F79CD083000","name":"libsmime3.so","path":"/lib/x86_64-linux-gnu/libsmime3.so","symbolFilePath":"/lib/x86_64-linux-gnu/libsmime3.so","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f72580888b0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libgdk_pixbuf-2.0.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":25,"event":"module","body":{"module":{"addressRange":"7F79CD043000","id":"7F79CD043000","name":"libnspr4.so","path":"/lib/x86_64-linux-gnu/libnspr4.so","symbolFilePath":"/lib/x86_64-linux-gnu/libnspr4.so","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f72581e59d0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libgtk-3.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":26,"event":"module","body":{"module":{"addressRange":"7F79CD01B000","id":"7F79CD01B000","name":"libgdk_pixbuf-2.0.so.0","path":"/lib/x86_64-linux-gnu/libgdk_pixbuf-2.0.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libgdk_pixbuf-2.0.so.0","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f725832d050 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libgdk-3.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":27,"event":"module","body":{"module":{"addressRange":"7F79CC908000","id":"7F79CC908000","name":"libgtk-3.so.0","path":"/lib/x86_64-linux-gnu/libgtk-3.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libgtk-3.so.0","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug 
event: 0x7f72580fb470 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libpangocairo-1.0.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":28,"event":"module","body":{"module":{"addressRange":"7F79CC802000","id":"7F79CC802000","name":"libgdk-3.so.0","path":"/lib/x86_64-linux-gnu/libgdk-3.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libgdk-3.so.0","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f72582f3a40 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libpango-1.0.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":29,"event":"module","body":{"module":{"addressRange":"7F79CC7F1000","id":"7F79CC7F1000","name":"libpangocairo-1.0.so.0","path":"/lib/x86_64-linux-gnu/libpangocairo-1.0.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libpangocairo-1.0.so.0","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258052b20 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libatk-1.0.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":30,"event":"module","body":{"module":{"addressRange":"7F79CC7A5000","id":"7F79CC7A5000","name":"libpango-1.0.so.0","path":"/lib/x86_64-linux-gnu/libpango-1.0.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libpango-1.0.so.0","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f725818e5a0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libcairo.so.2} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":31,"event":"module","body":{"module":{"addressRange":"7F79CC77B000","id":"7F79CC77B000","name":"libatk-1.0.so.0","path":"/lib/x86_64-linux-gnu/libatk-1.0.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libatk-1.0.so.0","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f72580a1930 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libdbus-1.so.3} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":32,"event":"module","body":{"module":{"addressRange":"7F79CC65B000","id":"7F79CC65B000","name":"libcairo.so.2","path":"/lib/x86_64-linux-gnu/libcairo.so.2","symbolFilePath":"/lib/x86_64-linux-gnu/libcairo.so.2","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f725831e640 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libexpat.so.1} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":33,"event":"module","body":{"module":{"addressRange":"7F79CC60C000","id":"7F79CC60C000","name":"libdbus-1.so.3","path":"/lib/x86_64-linux-gnu/libdbus-1.so.3","symbolFilePath":"/lib/x86_64-linux-gnu/libdbus-1.so.3","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f725824ef30 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libuuid.so.1} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- 
{"type":"event","seq":34,"event":"module","body":{"module":{"addressRange":"7F79CC5DC000","id":"7F79CC5DC000","name":"libexpat.so.1","path":"/lib/x86_64-linux-gnu/libexpat.so.1","symbolFilePath":"/lib/x86_64-linux-gnu/libexpat.so.1","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258352ca0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libXrandr.so.2} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":35,"event":"module","body":{"module":{"addressRange":"7F79CC5D3000","id":"7F79CC5D3000","name":"libuuid.so.1","path":"/lib/x86_64-linux-gnu/libuuid.so.1","symbolFilePath":"/lib/x86_64-linux-gnu/libuuid.so.1","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":36,"event":"module","body":{"module":{"addressRange":"7F79CC5C6000","id":"7F79CC5C6000","name":"libXrandr.so.2","path":"/lib/x86_64-linux-gnu/libXrandr.so.2","symbolFilePath":"/lib/x86_64-linux-gnu/libXrandr.so.2","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258202070 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libXss.so.1} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f725829df10 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libasound.so.2} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":37,"event":"module","body":{"module":{"addressRange":"7F79CC5C1000","id":"7F79CC5C1000","name":"libXss.so.1","path":"/lib/x86_64-linux-gnu/libXss.so.1","symbolFilePath":"/lib/x86_64-linux-gnu/libXss.so.1","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":38,"event":"module","body":{"module":{"addressRange":"7F79CC4C2000","id":"7F79CC4C2000","name":"libasound.so.2","path":"/lib/x86_64-linux-gnu/libasound.so.2","symbolFilePath":"/lib/x86_64-linux-gnu/libasound.so.2","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f725835ce40 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libatk-bridge-2.0.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f725824a3d0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libm.so.6} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":39,"event":"module","body":{"module":{"addressRange":"7F79CC48C000","id":"7F79CC48C000","name":"libatk-bridge-2.0.so.0","path":"/lib/x86_64-linux-gnu/libatk-bridge-2.0.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libatk-bridge-2.0.so.0","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7259330760 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libatspi.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":40,"event":"module","body":{"module":{"addressRange":"7F79CC33B000","id":"7F79CC33B000","name":"libm.so.6","path":"/lib/x86_64-linux-gnu/libm.so.6","symbolFilePath":"/lib/x86_64-linux-gnu/libm.so.6","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug 
event: 0x7f725925cf70 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libcups.so.2} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":41,"event":"module","body":{"module":{"addressRange":"7F79CC304000","id":"7F79CC304000","name":"libatspi.so.0","path":"/lib/x86_64-linux-gnu/libatspi.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libatspi.so.0","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f72582b25d0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libgcc_s.so.1} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":42,"event":"module","body":{"module":{"addressRange":"7F79CC273000","id":"7F79CC273000","name":"libcups.so.2","path":"/lib/x86_64-linux-gnu/libcups.so.2","symbolFilePath":"/lib/x86_64-linux-gnu/libcups.so.2","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258196740 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libc.so.6} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":43,"event":"module","body":{"module":{"addressRange":"7F79CC259000","id":"7F79CC259000","name":"libgcc_s.so.1","path":"/lib/x86_64-linux-gnu/libgcc_s.so.1","symbolFilePath":"/lib/x86_64-linux-gnu/libgcc_s.so.1","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258957c80 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {ld-linux-x86-64.so.2} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":44,"event":"module","body":{"module":{"addressRange":"7F79CC068000","id":"7F79CC068000","name":"libc.so.6","path":"/lib/x86_64-linux-gnu/libc.so.6","symbolFilePath":"/lib/x86_64-linux-gnu/libc.so.6","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258b76bb0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libffi.so.6} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":45,"event":"module","body":{"module":{"addressRange":"7F79CE286000","id":"7F79CE286000","name":"ld-linux-x86-64.so.2","path":"/lib64/ld-linux-x86-64.so.2","symbolFilePath":"/lib64/ld-linux-x86-64.so.2","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258e29de0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libpcre.so.3} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":46,"event":"module","body":{"module":{"addressRange":"7F79CC05E000","id":"7F79CC05E000","name":"libffi.so.6","path":"/lib/x86_64-linux-gnu/libffi.so.6","symbolFilePath":"/lib/x86_64-linux-gnu/libffi.so.6","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f72587fc270 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libgmodule-2.0.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- 
{"type":"event","seq":47,"event":"module","body":{"module":{"addressRange":"7F79CBFE8000","id":"7F79CBFE8000","name":"libpcre.so.3","path":"/lib/x86_64-linux-gnu/libpcre.so.3","symbolFilePath":"/lib/x86_64-linux-gnu/libpcre.so.3","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f72588683a0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libz.so.1} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":48,"event":"module","body":{"module":{"addressRange":"7F79CBFE2000","id":"7F79CBFE2000","name":"libgmodule-2.0.so.0","path":"/lib/x86_64-linux-gnu/libgmodule-2.0.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libgmodule-2.0.so.0","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7259490690 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libmount.so.1} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":49,"event":"module","body":{"module":{"addressRange":"7F79CBFC6000","id":"7F79CBFC6000","name":"libz.so.1","path":"/lib/x86_64-linux-gnu/libz.so.1","symbolFilePath":"/lib/x86_64-linux-gnu/libz.so.1","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f72594b1ba0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libselinux.so.1} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":50,"event":"module","body":{"module":{"addressRange":"7F79CBF66000","id":"7F79CBF66000","name":"libmount.so.1","path":"/lib/x86_64-linux-gnu/libmount.so.1","symbolFilePath":"/lib/x86_64-linux-gnu/libmount.so.1","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f72594bc290 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libresolv.so.2} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":51,"event":"module","body":{"module":{"addressRange":"7F79CBF3B000","id":"7F79CBF3B000","name":"libselinux.so.1","path":"/lib/x86_64-linux-gnu/libselinux.so.1","symbolFilePath":"/lib/x86_64-linux-gnu/libselinux.so.1","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7259510ad0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libXau.so.6} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":52,"event":"module","body":{"module":{"addressRange":"7F79CBF20000","id":"7F79CBF20000","name":"libresolv.so.2","path":"/lib/x86_64-linux-gnu/libresolv.so.2","symbolFilePath":"/lib/x86_64-linux-gnu/libresolv.so.2","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f725951ab10 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libXdmcp.so.6} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":53,"event":"module","body":{"module":{"addressRange":"7F79CBF18000","id":"7F79CBF18000","name":"libXau.so.6","path":"/lib/x86_64-linux-gnu/libXau.so.6","symbolFilePath":"/lib/x86_64-linux-gnu/libXau.so.6","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 
0x7f725951f850 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libplc4.so} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":54,"event":"module","body":{"module":{"addressRange":"7F79CBF10000","id":"7F79CBF10000","name":"libXdmcp.so.6","path":"/lib/x86_64-linux-gnu/libXdmcp.so.6","symbolFilePath":"/lib/x86_64-linux-gnu/libXdmcp.so.6","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f725952a270 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libplds4.so} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":55,"event":"module","body":{"module":{"addressRange":"7F79CBF09000","id":"7F79CBF09000","name":"libplc4.so","path":"/lib/x86_64-linux-gnu/libplc4.so","symbolFilePath":"/lib/x86_64-linux-gnu/libplc4.so","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f725952f360 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libcairo-gobject.so.2} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":56,"event":"module","body":{"module":{"addressRange":"7F79CBF04000","id":"7F79CBF04000","name":"libplds4.so","path":"/lib/x86_64-linux-gnu/libplds4.so","symbolFilePath":"/lib/x86_64-linux-gnu/libplds4.so","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7259535fe0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libepoxy.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":57,"event":"module","body":{"module":{"addressRange":"7F79CBEF8000","id":"7F79CBEF8000","name":"libcairo-gobject.so.2","path":"/lib/x86_64-linux-gnu/libcairo-gobject.so.2","symbolFilePath":"/lib/x86_64-linux-gnu/libcairo-gobject.so.2","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258b55be0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libfribidi.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":58,"event":"module","body":{"module":{"addressRange":"7F79CBDC4000","id":"7F79CBDC4000","name":"libepoxy.so.0","path":"/lib/x86_64-linux-gnu/libepoxy.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libepoxy.so.0","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258bb4d20 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libharfbuzz.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":59,"event":"module","body":{"module":{"addressRange":"7F79CBDA7000","id":"7F79CBDA7000","name":"libfribidi.so.0","path":"/lib/x86_64-linux-gnu/libfribidi.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libfribidi.so.0","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258b87ed0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libpangoft2-1.0.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- 
{"type":"event","seq":60,"event":"module","body":{"module":{"addressRange":"7F79CBCB0000","id":"7F79CBCB0000","name":"libharfbuzz.so.0","path":"/lib/x86_64-linux-gnu/libharfbuzz.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libharfbuzz.so.0","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258bafe10 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libfontconfig.so.1} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":61,"event":"module","body":{"module":{"addressRange":"7F79CBC96000","id":"7F79CBC96000","name":"libpangoft2-1.0.so.0","path":"/lib/x86_64-linux-gnu/libpangoft2-1.0.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libpangoft2-1.0.so.0","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258b1ccd0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libfreetype.so.6} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":62,"event":"module","body":{"module":{"addressRange":"7F79CBC50000","id":"7F79CBC50000","name":"libfontconfig.so.1","path":"/lib/x86_64-linux-gnu/libfontconfig.so.1","symbolFilePath":"/lib/x86_64-linux-gnu/libfontconfig.so.1","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258b4a1a0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libXinerama.so.1} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":63,"event":"module","body":{"module":{"addressRange":"7F79CBB95000","id":"7F79CBB95000","name":"libfreetype.so.6","path":"/lib/x86_64-linux-gnu/libfreetype.so.6","symbolFilePath":"/lib/x86_64-linux-gnu/libfreetype.so.6","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258929660 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libxkbcommon.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":64,"event":"module","body":{"module":{"addressRange":"7F79CBB8E000","id":"7F79CBB8E000","name":"libXinerama.so.1","path":"/lib/x86_64-linux-gnu/libXinerama.so.1","symbolFilePath":"/lib/x86_64-linux-gnu/libXinerama.so.1","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258bae480 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libwayland-cursor.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":65,"event":"module","body":{"module":{"addressRange":"7F79CBB4D000","id":"7F79CBB4D000","name":"libxkbcommon.so.0","path":"/lib/x86_64-linux-gnu/libxkbcommon.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libxkbcommon.so.0","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258c0c350 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libwayland-egl.so.1} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- 
{"type":"event","seq":66,"event":"module","body":{"module":{"addressRange":"7F79CBB44000","id":"7F79CBB44000","name":"libwayland-cursor.so.0","path":"/lib/x86_64-linux-gnu/libwayland-cursor.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libwayland-cursor.so.0","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258c14d80 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libwayland-client.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":67,"event":"module","body":{"module":{"addressRange":"7F79CBB3F000","id":"7F79CBB3F000","name":"libwayland-egl.so.1","path":"/lib/x86_64-linux-gnu/libwayland-egl.so.1","symbolFilePath":"/lib/x86_64-linux-gnu/libwayland-egl.so.1","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7259691c60 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libthai.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":68,"event":"module","body":{"module":{"addressRange":"7F79CBB2E000","id":"7F79CBB2E000","name":"libwayland-client.so.0","path":"/lib/x86_64-linux-gnu/libwayland-client.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libwayland-client.so.0","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7259874ee0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libpixman-1.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":69,"event":"module","body":{"module":{"addressRange":"7F79CBB21000","id":"7F79CBB21000","name":"libthai.so.0","path":"/lib/x86_64-linux-gnu/libthai.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libthai.so.0","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258bbbdf0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libpng16.so.16} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":70,"event":"module","body":{"module":{"addressRange":"7F79CBA7A000","id":"7F79CBA7A000","name":"libpixman-1.so.0","path":"/lib/x86_64-linux-gnu/libpixman-1.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libpixman-1.so.0","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7259909d30 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libxcb-shm.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":71,"event":"module","body":{"module":{"addressRange":"7F79CBA42000","id":"7F79CBA42000","name":"libpng16.so.16","path":"/lib/x86_64-linux-gnu/libpng16.so.16","symbolFilePath":"/lib/x86_64-linux-gnu/libpng16.so.16","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258b9dcf0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libxcb-render.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":72,"event":"module","body":{"module":{"addressRange":"7F79CBA3D000","id":"7F79CBA3D000","name":"libxcb-shm.so.0","path":"/lib/x86_64-linux-gnu/libxcb-shm.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libxcb-shm.so.0","symbolStatus":"Symbols 
loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258b19580 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libsystemd.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":73,"event":"module","body":{"module":{"addressRange":"7F79CBA2E000","id":"7F79CBA2E000","name":"libxcb-render.so.0","path":"/lib/x86_64-linux-gnu/libxcb-render.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libxcb-render.so.0","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258c2ee50 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libgssapi_krb5.so.2} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":74,"event":"module","body":{"module":{"addressRange":"7F79CB984000","id":"7F79CB984000","name":"libsystemd.so.0","path":"/lib/x86_64-linux-gnu/libsystemd.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libsystemd.so.0","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258c6af70 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libgnutls.so.30} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":75,"event":"module","body":{"module":{"addressRange":"7F79CB937000","id":"7F79CB937000","name":"libgssapi_krb5.so.2","path":"/lib/x86_64-linux-gnu/libgssapi_krb5.so.2","symbolFilePath":"/lib/x86_64-linux-gnu/libgssapi_krb5.so.2","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258c20dd0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libavahi-common.so.3} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":76,"event":"module","body":{"module":{"addressRange":"7F79CB778000","id":"7F79CB778000","name":"libgnutls.so.30","path":"/lib/x86_64-linux-gnu/libgnutls.so.30","symbolFilePath":"/lib/x86_64-linux-gnu/libgnutls.so.30","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f72580750e0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libavahi-client.so.3} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":77,"event":"module","body":{"module":{"addressRange":"7F79CB76A000","id":"7F79CB76A000","name":"libavahi-common.so.3","path":"/lib/x86_64-linux-gnu/libavahi-common.so.3","symbolFilePath":"/lib/x86_64-linux-gnu/libavahi-common.so.3","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":78,"event":"module","body":{"module":{"addressRange":"7F79CB757000","id":"7F79CB757000","name":"libavahi-client.so.3","path":"/lib/x86_64-linux-gnu/libavahi-client.so.3","symbolFilePath":"/lib/x86_64-linux-gnu/libavahi-client.so.3","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258c39780 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libblkid.so.1} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f72584261f0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libpcre2-8.so.0} [2020-01-17T07:46:16Z DEBUG 
codelldb::wire_protocol] <-- {"type":"event","seq":79,"event":"module","body":{"module":{"addressRange":"7F79CB700000","id":"7F79CB700000","name":"libblkid.so.1","path":"/lib/x86_64-linux-gnu/libblkid.so.1","symbolFilePath":"/lib/x86_64-linux-gnu/libblkid.so.1","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7259aa2ee0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libbsd.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":80,"event":"module","body":{"module":{"addressRange":"7F79CB679000","id":"7F79CB679000","name":"libpcre2-8.so.0","path":"/lib/x86_64-linux-gnu/libpcre2-8.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libpcre2-8.so.0","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f72584222a0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libgraphite2.so.3} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":81,"event":"module","body":{"module":{"addressRange":"7F79CB65F000","id":"7F79CB65F000","name":"libbsd.so.0","path":"/lib/x86_64-linux-gnu/libbsd.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libbsd.so.0","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258d0b820 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libdatrie.so.1} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":82,"event":"module","body":{"module":{"addressRange":"7F79CB632000","id":"7F79CB632000","name":"libgraphite2.so.3","path":"/lib/x86_64-linux-gnu/libgraphite2.so.3","symbolFilePath":"/lib/x86_64-linux-gnu/libgraphite2.so.3","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258c29760 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {liblzma.so.5} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":83,"event":"module","body":{"module":{"addressRange":"7F79CB628000","id":"7F79CB628000","name":"libdatrie.so.1","path":"/lib/x86_64-linux-gnu/libdatrie.so.1","symbolFilePath":"/lib/x86_64-linux-gnu/libdatrie.so.1","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258b83e70 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {liblz4.so.1} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":84,"event":"module","body":{"module":{"addressRange":"7F79CB601000","id":"7F79CB601000","name":"liblzma.so.5","path":"/lib/x86_64-linux-gnu/liblzma.so.5","symbolFilePath":"/lib/x86_64-linux-gnu/liblzma.so.5","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7259b955f0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libgcrypt.so.20} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":85,"event":"module","body":{"module":{"addressRange":"7F79CB5DF000","id":"7F79CB5DF000","name":"liblz4.so.1","path":"/lib/x86_64-linux-gnu/liblz4.so.1","symbolFilePath":"/lib/x86_64-linux-gnu/liblz4.so.1","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG 
codelldb::debug_session] Debug event: 0x7f7259beb5f0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libkrb5.so.3} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":86,"event":"module","body":{"module":{"addressRange":"7F79CB4C1000","id":"7F79CB4C1000","name":"libgcrypt.so.20","path":"/lib/x86_64-linux-gnu/libgcrypt.so.20","symbolFilePath":"/lib/x86_64-linux-gnu/libgcrypt.so.20","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f72599beb20 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libk5crypto.so.3} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":87,"event":"module","body":{"module":{"addressRange":"7F79CB3E4000","id":"7F79CB3E4000","name":"libkrb5.so.3","path":"/lib/x86_64-linux-gnu/libkrb5.so.3","symbolFilePath":"/lib/x86_64-linux-gnu/libkrb5.so.3","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7259bf8e80 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libcom_err.so.2} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":88,"event":"module","body":{"module":{"addressRange":"7F79CB3B3000","id":"7F79CB3B3000","name":"libk5crypto.so.3","path":"/lib/x86_64-linux-gnu/libk5crypto.so.3","symbolFilePath":"/lib/x86_64-linux-gnu/libk5crypto.so.3","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7259c7ab30 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libkrb5support.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":89,"event":"module","body":{"module":{"addressRange":"7F79CB3AC000","id":"7F79CB3AC000","name":"libcom_err.so.2","path":"/lib/x86_64-linux-gnu/libcom_err.so.2","symbolFilePath":"/lib/x86_64-linux-gnu/libcom_err.so.2","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7259c9e7c0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libp11-kit.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":90,"event":"module","body":{"module":{"addressRange":"7F79CB39D000","id":"7F79CB39D000","name":"libkrb5support.so.0","path":"/lib/x86_64-linux-gnu/libkrb5support.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libkrb5support.so.0","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7259ca7280 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libidn2.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":91,"event":"module","body":{"module":{"addressRange":"7F79CB266000","id":"7F79CB266000","name":"libp11-kit.so.0","path":"/lib/x86_64-linux-gnu/libp11-kit.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libp11-kit.so.0","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7259cad610 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libunistring.so.2} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- 
{"type":"event","seq":92,"event":"module","body":{"module":{"addressRange":"7F79CB245000","id":"7F79CB245000","name":"libidn2.so.0","path":"/lib/x86_64-linux-gnu/libidn2.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libidn2.so.0","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7259dd8a30 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libtasn1.so.6} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":93,"event":"module","body":{"module":{"addressRange":"7F79CB0C3000","id":"7F79CB0C3000","name":"libunistring.so.2","path":"/lib/x86_64-linux-gnu/libunistring.so.2","symbolFilePath":"/lib/x86_64-linux-gnu/libunistring.so.2","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7259de3950 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libnettle.so.6} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":94,"event":"module","body":{"module":{"addressRange":"7F79CB0AD000","id":"7F79CB0AD000","name":"libtasn1.so.6","path":"/lib/x86_64-linux-gnu/libtasn1.so.6","symbolFilePath":"/lib/x86_64-linux-gnu/libtasn1.so.6","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7259dfedc0 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libhogweed.so.4} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":95,"event":"module","body":{"module":{"addressRange":"7F79CB075000","id":"7F79CB075000","name":"libnettle.so.6","path":"/lib/x86_64-linux-gnu/libnettle.so.6","symbolFilePath":"/lib/x86_64-linux-gnu/libnettle.so.6","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7259e0ab50 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libgmp.so.10} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":96,"event":"module","body":{"module":{"addressRange":"7F79CB03D000","id":"7F79CB03D000","name":"libhogweed.so.4","path":"/lib/x86_64-linux-gnu/libhogweed.so.4","symbolFilePath":"/lib/x86_64-linux-gnu/libhogweed.so.4","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7259e2a040 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libgpg-error.so.0} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":97,"event":"module","body":{"module":{"addressRange":"7F79CAFBA000","id":"7F79CAFBA000","name":"libgmp.so.10","path":"/lib/x86_64-linux-gnu/libgmp.so.10","symbolFilePath":"/lib/x86_64-linux-gnu/libgmp.so.10","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7259e4cf00 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {libkeyutils.so.1} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":98,"event":"module","body":{"module":{"addressRange":"7F79CAF97000","id":"7F79CAF97000","name":"libgpg-error.so.0","path":"/lib/x86_64-linux-gnu/libgpg-error.so.0","symbolFilePath":"/lib/x86_64-linux-gnu/libgpg-error.so.0","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG 
codelldb::debug_session] Debug event: 0x7f725b500030 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000002 (modules-loaded), data = {addon.node} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":99,"event":"module","body":{"module":{"addressRange":"7F79CAF90000","id":"7F79CAF90000","name":"libkeyutils.so.1","path":"/lib/x86_64-linux-gnu/libkeyutils.so.1","symbolFilePath":"/lib/x86_64-linux-gnu/libkeyutils.so.1","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::debug_session] Debug event: 0x7f7258c45480 Event: broadcaster = 0x7f7270001560 (lldb.target), type = 0x00000001 (breakpoint-changed), data = {} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":100,"event":"module","body":{"module":{"addressRange":"7F79B8AC4000","id":"7F79B8AC4000","name":"addon.node","path":"/home/robin/dev//ReproAlex16012020/-ui/node_modules/@/node-corelib/build/linux/x64/electron/75/addon.node","symbolFilePath":"/home/robin/dev//ReproAlex16012020/-ui/node_modules/@/node-corelib/build/linux/x64/electron/75/addon.node","symbolStatus":"Symbols loaded."},"reason":"new"}} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"event","seq":101,"event":"breakpoint","body":{"breakpoint":{"id":2,"message":"Locations: 2","verified":true},"reason":"changed"}} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] --> {"command":"threads","type":"request","seq":10} [2020-01-17T07:46:16Z DEBUG codelldb::wire_protocol] <-- {"type":"response","request_seq":10,"success":true,"command":"threads","body":{"threads":[{"id":8265,"name":"1: tid=8265"},{"id":8266,"name":"2: tid=8266"},{"id":8268,"name":"3: tid=8268"},{"id":8269,"name":"4: tid=8269"},{"id":8271,"name":"5: tid=8271"},{"id":8272,"name":"6: tid=8272"},{"id":8273,"name":"7: tid=8273"},{"id":8274,"name":"8: tid=8274"},{"id":8275,"name":"9: tid=8275"},{"id":8276,"name":"10: tid=8276"},{"id":8278,"name":"11: tid=8278"},{"id":8279,"name":"12: tid=8279"},{"id":8288,"name":"13: tid=8288"},{"id":8348,"name":"14: tid=8348"},{"id":8349,"name":"15: tid=8349"},{"id":8350,"name":"16: tid=8350"},{"id":8351,"name":"17: tid=8351"},{"id":8352,"name":"18: tid=8352"},{"id":8353,"name":"19: tid=8353"},{"id":8354,"name":"20: tid=8354"},{"id":8355,"name":"21: tid=8355"},{"id":8356,"name":"22: tid=8356"},{"id":8357,"name":"23: tid=8357"},{"id":8358,"name":"24: tid=8358"},{"id":8359,"name":"25: tid=8359"},{"id":8360,"name":"26: tid=8360"},{"id":8361,"name":"27: tid=8361"},{"id":8362,"name":"28: tid=8362"},{"id":8363,"name":"29: tid=8363"},{"id":8364,"name":"30: tid=8364"},{"id":8365,"name":"31: tid=8365"},{"id":8366,"name":"32: tid=8366"},{"id":8367,"name":"33: tid=8367"},{"id":8368,"name":"34: tid=8368"},{"id":8369,"name":"35: tid=8369"},{"id":8370,"name":"36: tid=8370"},{"id":8371,"name":"37: tid=8371"},{"id":8372,"name":"38: tid=8372"},{"id":8373,"name":"39: tid=8373"},{"id":8374,"name":"40: tid=8374"},{"id":8375,"name":"41: tid=8375"},{"id":8376,"name":"42: tid=8376"},{"id":8377,"name":"43: tid=8377"},{"id":8378,"name":"44: tid=8378"},{"id":8379,"name":"45: tid=8379"},{"id":8381,"name":"46: tid=8381"},{"id":8382,"name":"47: tid=8382"},{"id":8383,"name":"48: tid=8383"},{"id":8384,"name":"49: tid=8384"},{"id":8386,"name":"50: tid=8386"},{"id":8387,"name":"51: tid=8387"}]}} Hmm seeing the same on latest version. I think I had this error twice. One time, downgrading the vscode-lldb plugin helped. The other time, the error was caused by different patch versions of lldb and clang. 
So we apparently changed nothing and suddenly this error went away and everything started working. We didn't downgrade or upgrade; it just stopped happening the next time we tried to debug. Go figure. This was on a fresh install of VS Code. I'll let you know if I see this error again, but at least this is a little bit of a data point.
Sometimes it also happens that you get those error messages, wait a few seconds, and then everything works fine... As soon as I witness this bug again, I'll send the log files.
Looks like the program loads so many shared libs that the "module loaded" events overflow the input queue. :unamused: I guess I'll need to bump that limit up...
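For anyone hitting this: the symptom matches a bounded event queue filling up faster than the client side drains it. Below is a minimal, self-contained sketch of that failure mode using std::sync::mpsc; the channel type, capacity, and event names are assumptions for illustration only, not codelldb's actual internals.

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

fn main() {
    // Bounded queue standing in for the adapter's internal event channel
    // (capacity of 4 is an arbitrary illustrative value).
    let (tx, rx) = sync_channel::<String>(4);

    // A burst of "modules-loaded"-style events, more than the queue can hold
    // before anything has a chance to drain it.
    for i in 0..10 {
        match tx.try_send(format!("modules-loaded #{}", i)) {
            Ok(()) => {}
            // Once the queue is full, try_send fails with Full(..); this is the
            // kind of situation that surfaces as
            // "Could not send event to DebugSession: Full(..)".
            Err(TrySendError::Full(ev)) => eprintln!("dropped event: {}", ev),
            Err(TrySendError::Disconnected(_)) => break,
        }
    }

    // Draining afterwards shows that only the first few events made it through.
    while let Ok(ev) = rx.try_recv() {
        println!("delivered: {}", ev);
    }
}
```

Raising the queue capacity (the 4 above) or draining events concurrently lets the burst of module-load events fit, which is presumably what "bumping that up" refers to.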
2025-04-01T04:35:50.531542
2019-02-27T15:46:22
415191338
{ "authors": [ "epapoutsellis", "jakobsj" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11812", "repo": "vais-ral/CCPi-Framework", "url": "https://github.com/vais-ral/CCPi-Framework/issues/209" }
gharchive/issue
Implement adaptive PDHG: https://www.cs.umd.edu/~tomg/projects/pdhg/
Adaptive step size selection that does not require knowledge of the operator norm. Also check which stopping criteria to implement. Also, http://citeseerx.ist.psu.edu/viewdoc/download?doi=<IP_ADDRESS>1.6056&rep=rep1&type=pdf
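For reference, here is a sketch of the residual-balancing idea behind the linked adaptive PDHG work, written for the x-then-y Chambolle-Pock ordering with over-relaxation x̄_{k+1} = 2x_{k+1} - x_k; the exact constants and the direction of the scaling are from memory and should be double-checked against the paper before implementing.

```latex
% Primal/dual residuals after one PDHG iteration with step sizes \tau_k, \sigma_k
% (from the prox optimality conditions; x-update first, then y-update):
\begin{align*}
  p_{k+1} &= \frac{x_k - x_{k+1}}{\tau_k} - A^{T}(y_k - y_{k+1}),\\
  d_{k+1} &= \frac{y_k - y_{k+1}}{\sigma_k} + A(x_{k+1} - x_k).
\end{align*}
% Residual balancing: adapt the step sizes whenever one residual dominates,
% with \Delta > 1, \eta \in (0,1), \alpha_0 \in (0,1)
% (direction of the rescaling as I recall it -- verify against the paper):
\begin{align*}
  \text{if } \|p_{k+1}\| > \Delta\,\|d_{k+1}\| &:\quad
    \tau_{k+1} = \tau_k (1-\alpha_k),\quad
    \sigma_{k+1} = \frac{\sigma_k}{1-\alpha_k},\quad
    \alpha_{k+1} = \eta\,\alpha_k,\\
  \text{if } \|d_{k+1}\| > \Delta\,\|p_{k+1}\| &:\quad
    \tau_{k+1} = \frac{\tau_k}{1-\alpha_k},\quad
    \sigma_{k+1} = \sigma_k (1-\alpha_k),\quad
    \alpha_{k+1} = \eta\,\alpha_k,
\end{align*}
% otherwise keep (\tau, \sigma, \alpha) unchanged. The rescaling leaves the
% product \tau_k \sigma_k unchanged, and shrinking \alpha_k geometrically makes
% the adaptation die out over the iterations.
```

The residual norms also give a natural stopping criterion: stop when both \|p_k\| and \|d_k\| fall below a chosen tolerance.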
2025-04-01T04:35:50.544605
2023-08-28T18:28:07
1870240965
{ "authors": [ "mert-kurttutan", "valebes" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11813", "repo": "valebes/ppl", "url": "https://github.com/valebes/ppl/pull/33" }
gharchive/pull-request
Fix Typo in README.md
Just noticed while trying out your project.
@mert-kurttutan Thanks for your contribution and for trying PPL. Sorry to bother you, but I wanted to ask what you think about the available documentation? If you have any suggestions, please let us know.
I think publishing on crates.io (and hence providing API documentation on docs.rs) would be great, as it is the most convenient interface for checking out the APIs, more convenient than going through the source code. Also, an mdBook-style website is popular among Rust crates; it can be created with GitHub Pages and https://github.com/rust-lang/mdBook
Thanks for the suggestions. I will surely look into publishing it on crates.io. For the examples and guide, I think using mdBook is a good suggestion.