Dataset columns: id (string, 4 to 10 chars); text (string, 4 chars to 2.14M chars); source (string, 2 classes); created (timestamp, 2001-05-16 21:05:09 to 2025-01-01 03:38:30); added (timestamp, 2025-04-01 04:05:38 to 2025-04-01 07:14:06); metadata (dict).
462418533
Make not authenticated message customizable
PE servers don't authenticate with Minecraft.net, so it would be nice if we could change this message to say "Not authenticated with Xbox Live" or something. A hard-coded change replacing "Minecraft.net" with "Xbox Live" on Bedrock Edition would also be a good alternative.
This now uses the correct Bedrock-specific "server is in online mode" message.
gharchive/issue
2019-06-30T15:43:50
2025-04-01T06:46:18.891312
{ "authors": [ "artulloss", "colinrgodsey" ], "repo": "yesdog/Waterdog", "url": "https://github.com/yesdog/Waterdog/issues/32", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1283756747
Failed to add punctuation to text
I used the following approach to add punctuation to text, but it raises an error at runtime. How can I fix this?
Code:
import ppasr
from ppasr.utils.text_utils import PunctuationExecutor
pun_executor = PunctuationExecutor(model_dir='models/pun_models')
result = pun_executor(text)
print(result)
Error:
E0624 20:27:29.660760 12207 analysis_config.cc:95] Please compile with gpu to EnableGpu()
[2022-06-24 20:27:35,527] [ INFO] - Downloading https://bj.bcebos.com/paddlenlp/models/transformers/ernie/vocab.txt and saved to /home/qugang/.paddlenlp/models/ernie-1.0
[2022-06-24 20:27:35,608] [ INFO] - Downloading vocab.txt from https://bj.bcebos.com/paddlenlp/models/transformers/ernie/vocab.txt
100%|██████████| 89.5k/89.5k [00:00<00:00, 462kB/s]
None
'NoneType' object has no attribute 'lower'
That's a null-pointer exception. What about your text?
Thanks, I understand where the problem is now.
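The "'NoneType' object has no attribute 'lower'" error above comes from calling the executor with text undefined or None. A minimal sketch of guarding the input before invoking the punctuation model; the executor here is a hypothetical stand-in for ppasr's PunctuationExecutor, which this sketch does not require to be installed:

```python
def punctuate_safely(executor, text):
    # Reject None/empty input up front: PPASR's executor lowercases the
    # text internally, which is what raised
    # "'NoneType' object has no attribute 'lower'" above.
    if not isinstance(text, str) or not text:
        raise ValueError("text must be a non-empty string, got: %r" % (text,))
    return executor(text)

# Stand-in executor mimicking the model's lowercasing preprocessing:
fake_executor = lambda t: t.lower()
print(punctuate_safely(fake_executor, "HELLO WORLD"))  # -> hello world
```

With the real PunctuationExecutor, the same guard would have turned the cryptic attribute error into an immediate, readable message about the missing input.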
gharchive/issue
2022-06-24T13:29:49
2025-04-01T06:46:18.906374
{ "authors": [ "billqu01", "yeyupiaoling" ], "repo": "yeyupiaoling/PPASR", "url": "https://github.com/yeyupiaoling/PPASR/issues/85", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
153721071
get_outer_frame_variables() fails in Jupyter Notebook
The function get_outer_frame_variables() fails in Jupyter Notebook because the elements of inspect.getouterframes(inspect.currentframe()) are plain tuples and have no attribute filename. The code related to this bug is here: https://github.com/yhat/pandasql/blob/master/pandasql/sqldf.py#L96
Simple code to trigger the bug (in Jupyter Notebook):
import pandasql as psql
psql.sqldf("SELECT * FROM df_demo") # assign df_demo to any simple DataFrame
+1
It seems that in Python 2, inspect.getouterframes returns each frame as a plain tuple rather than a namedtuple (as in Python 3).
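A version-agnostic workaround for the bug above is to index into each frame record instead of using attribute access, since the filename is field 1 both in Python 2's plain tuples and in Python 3's FrameInfo namedtuples. A sketch of the idea, not pandasql's actual fix:

```python
import inspect

def outer_frame_filenames():
    # info[1] is the filename whether getouterframes() yields plain
    # tuples (Python 2) or FrameInfo namedtuples (Python 3+).
    frames = inspect.getouterframes(inspect.currentframe())
    try:
        return [info[1] for info in frames]
    finally:
        del frames  # avoid reference cycles through frame objects

filenames = outer_frame_filenames()
```

Inside Jupyter the filenames are synthetic (e.g. "<ipython-input-...>"), but the index access no longer raises AttributeError.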
gharchive/issue
2016-05-09T07:52:29
2025-04-01T06:46:18.918169
{ "authors": [ "fantasy86", "haobibo" ], "repo": "yhat/pandasql", "url": "https://github.com/yhat/pandasql/issues/51", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
241194827
Exception
Exception: 'NoneType' object has no attribute '__getitem__', or Exception: 'NoneType' object is not subscriptable, or Exception: a must be non-empty. What is the reason for these exceptions?
Hello, what version of Python do you use?
gharchive/issue
2017-07-07T08:33:24
2025-04-01T06:46:18.919615
{ "authors": [ "Ahleroy", "LMdeLiangMi" ], "repo": "yhenon/keras-frcnn", "url": "https://github.com/yhenon/keras-frcnn/issues/98", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1077740350
Cannot run: 'items'
The GitHub link is unreachable.
@Jim2g The error message is quite clear.
gharchive/issue
2021-12-12T07:43:30
2025-04-01T06:46:18.925728
{ "authors": [ "Jim2g", "xyzzen" ], "repo": "yhy0/github-cve-monitor", "url": "https://github.com/yhy0/github-cve-monitor/issues/32", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
221514247
yii\caching\Cache::getOrSet() has a type requirement for \Closure, but callable would be sufficient
At this line: https://github.com/yiisoft/yii2/blob/master/framework/caching/Cache.php#L569 the method signature requires \Closure; however, the implementation uses call_user_func(), for which callable is sufficient. This could be checked with is_callable(). Allowing any callable, rather than specifically \Closure, would allow one to use an array callable. This is preferable in model objects, because closures cannot be serialized. I appreciate that serializing closures will be a future addition to Yii, but it is not ready yet. This limitation makes getOrSet() mostly unusable for me.
@SilverFire may have an opinion on this as he wrote the code.
Fixed, thank you for your suggestion
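The requested change, accepting any callable rather than \Closure only, has a straightforward analogue in Python, where callable() plays the role of PHP's is_callable(). The helper below is hypothetical and only sketches the contract, not Yii's actual implementation:

```python
def get_or_set(cache, key, callback):
    # Accept ANY callable (function, bound method, lambda): the looser
    # contract the issue asks for instead of closures only. PHP's array
    # callables map to bound methods here, which serialize fine.
    if not callable(callback):
        raise TypeError("callback must be callable")
    if key in cache:
        return cache[key]
    value = callback()
    cache[key] = value
    return value

cache = {}
get_or_set(cache, "answer", lambda: 42)   # miss: computes and stores 42
get_or_set(cache, "answer", lambda: 99)   # hit: returns the cached 42
```

The point of the issue is exactly this: the runtime check only needs "is it callable?", so constraining the signature to one specific callable type forfeits flexibility for no gain.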
gharchive/issue
2017-04-13T09:55:53
2025-04-01T06:46:18.954244
{ "authors": [ "SilverFire", "spikyjt" ], "repo": "yiisoft/yii2", "url": "https://github.com/yiisoft/yii2/issues/13981", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
44798048
issue on setting email template on console application
Hi, I have an issue. I want to send email using a console application, with a custom template. This is my code:
\Yii::$app->mail->compose('template/template1', ['shipments' => $user->shipmentNotif])
->setFrom('support@yopmail.com')
->setTo($user->usr_email)
->setSubject('News Letter')
->send();
But I got this error:
The view file does not exist: /Users/neogazebo/public_html/ikargo/console/mail/layouts/html.php
Why is it still searching for the default file template? Am I doing it wrong?
How can I call the mail layout inside of my app\mail\layouts\html.php to use in Yii::$app->mailer->compose('html')?
How can I use several layouts and specify the layout that I need?
Refer to http://www.yiiframework.com/doc-2.0/guide-tutorial-mailing.html
First set htmlLayout using the following in your controller method:
Yii::$app->mailer->htmlLayout = "@app/mail/layouts/html_test";
Then use the compose method like the following:
Yii::$app->mailer->compose('organization_add')
->setFrom('from@domain.com')
->setTo($model->user_name)
->setSubject('Message subject')
->send();
This code is not tested but you could try it to set your layout.
gharchive/issue
2014-10-03T12:22:59
2025-04-01T06:46:18.960635
{ "authors": [ "fabiosantos87", "maxttor", "neogazebo", "romitcn", "romits1990" ], "repo": "yiisoft/yii2", "url": "https://github.com/yiisoft/yii2/issues/5332", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
102354329
Bug in ActiveQuery (select, joinWith, asArray combination fails)
Hi there, I found a problem with ActiveQuery. I tried a query like this:
Order::find()->select(['price', 'firstname', 'lastname'])->joinWith('user')->asArray()->all();
Exception:
exception 'yii\base\ErrorException' with message 'Undefined index: id' in E:\Web\root-users\proj1\vendor\yiisoft\yii2\db\ActiveQuery.php:263
Stack trace:
#0 E:\Web\root-users\proj1\vendor\yiisoft\yii2\db\ActiveQuery.php(263): yii\base\ErrorHandler->handleError(8, 'Undefined index...', 'E:\\Web\\root-use...', 263, Array)
#1 E:\Web\root-users\proj1\vendor\yiisoft\yii2\db\ActiveQuery.php(220): yii\db\ActiveQuery->removeDuplicatedModels(Array)
#2 E:\Web\root-users\proj1\vendor\yiisoft\yii2\db\Query.php(207): yii\db\ActiveQuery->populate(Array)
#3 E:\Web\root-users\proj1\vendor\yiisoft\yii2\db\ActiveQuery.php(130): yii\db\Query->all(NULL)
#4 E:\Web\root-users\proj1\backend\controllers\order\ExportCsvAction.php(38): yii\db\ActiveQuery->all()
#5 [internal function]: backend\controllers\order\ExportCsvAction->run()
#6 E:\Web\root-users\proj1\vendor\yiisoft\yii2\base\Action.php(92): call_user_func_array(Array, Array)
#7 E:\Web\root-users\proj1\vendor\yiisoft\yii2\base\Controller.php(151): yii\base\Action->runWithParams(Array)
#8 E:\Web\root-users\proj1\vendor\yiisoft\yii2\base\Module.php(455): yii\base\Controller->runAction('export-csv', Array)
#9 E:\Web\root-users\proj1\vendor\yiisoft\yii2\web\Application.php(84): yii\base\Module->runAction('order/ex...', Array)
#10 E:\Web\root-users\proj1\vendor\yiisoft\yii2\base\Application.php(375): yii\web\Application->handleRequest(Object(yii\web\Request))
#11 E:\Web\root-users\proj1\backend\web\index.php(18): yii\base\Application->run()
#12 {main}
Version: 2.0.4
It still does not work after the update to 2.0.6.
Exception:
exception 'yii\base\ErrorException' with message 'Undefined index: user_id' in E:\Web\root-users\proj1\vendor\yiisoft\yii2\db\ActiveRelationTrait.php:456
Stack trace:
#0 E:\Web\root-users\proj1\vendor\yiisoft\yii2\db\ActiveRelationTrait.php(456): yii\base\ErrorHandler->handleError(8, 'Undefined index...', 'E:\\Web\\root-use...', 456, Array)
#1 E:\Web\root-users\proj1\vendor\yiisoft\yii2\db\ActiveRelationTrait.php(215): yii\db\ActiveQuery->filterByModels(Array)
#2 E:\Web\root-users\proj1\vendor\yiisoft\yii2\db\ActiveQueryTrait.php(170): yii\db\ActiveQuery->populateRelation('user', '<span class="st...')
#3 E:\Web\root-users\proj1\vendor\yiisoft\yii2\db\ActiveQuery.php(225): yii\db\ActiveQuery->findWith(Array, '<span class="st...')
#4 E:\Web\root-users\proj1\vendor\yiisoft\yii2\db\Query.php(207): yii\db\ActiveQuery->populate(Array)
#5 E:\Web\root-users\proj1\vendor\yiisoft\yii2\db\ActiveQuery.php(132): yii\db\Query->all(NULL)
#6 E:\Web\root-users\proj1\backend\controllers\order\ExportCsvAction.php(38): yii\db\ActiveQuery->all()
#7 [internal function]: backend\controllers\order\ExportCsvAction->run()
#8 E:\Web\root-users\proj1\vendor\yiisoft\yii2\base\Action.php(92): call_user_func_array(Array, Array)
#9 E:\Web\root-users\proj1\vendor\yiisoft\yii2\base\Controller.php(151): yii\base\Action->runWithParams(Array)
#10 E:\Web\root-users\proj1\vendor\yiisoft\yii2\base\Module.php(455): yii\base\Controller->runAction('export-csv', Array)
#11 E:\Web\root-users\proj1\vendor\yiisoft\yii2\web\Application.php(84): yii\base\Module->runAction('order/ex...', Array)
#12 E:\Web\root-users\proj1\vendor\yiisoft\yii2\base\Application.php(375): yii\web\Application->handleRequest(Object(yii\web\Request))
#13 E:\Web\root-users\proj1\backend\web\index.php(18): yii\base\Application->run()
#14 {main}
This pretty clearly indicates an issue in your code. You seem to use user_id somewhere where it doesn't belong. Probably in the relation definition of user.
@mikehaertl the code is fine, but the code from Yii still has some problems. I know it can be a black swan for you and that no one tests it. The expectations of this functionality are:
- the output is a two-dimensional array (no less, no more);
- the first dimension is a list of arrays;
- the second dimension is a dictionary with column names as keys (prefixed if they belong to a foreign model) and values matching only the select clause.
@mklemarczyk I think it would help the core developers very much if you supplied a boiled-down code example that helps to reproduce your issue.
If you have relations, you need to select the related columns.
@Alex-Code it should be done automatically. In select you specify only the data that should be output, not what is selected from the DB. That should be done by the ORM.
The same thing. There is a workaround:
$result = Order::find()->select(['price', 'firstname', 'lastname'])->joinWith('user')
->createCommand()->queryAll();
I am also experiencing this as a bug. It should work the way MySQL does: you can select specific columns while joining on different fields, and it will only return the fields you selected.
Another example:
class RecreationRentalPeriodQuery extends ActiveQuery
{
    /* .. */
    /**
     * @param RecreationObjectType $objectType
     * @return static
     */
    public function byObjectType(RecreationObjectType $objectType)
    {
        return $this->innerJoinWith([
            'rentalType.rentalTypeConnection.objects.objectType' => function($q) use ($objectType) {
                $modelClass = $q->modelClass;
                $q->andWhere([$modelClass::tableName().'.type_id' => $objectType->type_id]);
            }]
        );
    }
    public function byAmountOfNights($objectType, $amountOfNights)
    {
        $allPeriods = RecreationRentalPeriod::find()->cache()
            ->select([
                RecreationRentalPeriod::tableName().'.period_id',
                RecreationRentalType::tableName().'.rental_id',
            ])
            ->byObjectType($instance)->asArray()->all();
        /* ..
*/
Where basically the only thing I want to do is:
$allPeriods = RecreationRentalPeriod::find()->cache()
->select('period_id')
->byObjectType($instance)->asArray()->all();
Can someone explain why it needs the rental id? It also uses other relations where it does not require selecting the key. I would have expected something more like this (which would obviously not be workable):
$allPeriods = RecreationRentalPeriod::find()->cache()
->select([
RecreationRentalType::tableName().'.rental_id',
RecreationObjectRentalConnection::tableName().'.rental_id',
RecreationObjectRentalConnection::tableName().'.object_id',
RecreationObject::tableName().'.object_id',
RecreationObject::tableName().'.type_id',
RecreationObjectType::tableName().'.type_id'
@RdeWilde: one issue at a time please. Can you formulate it again in a way that is easy to grasp for people looking at it for the first time?
@dynasource It is one and the same issue, I think. Please explain what I need to clarify. I can post a gist if that helps.
@RdeWilde, it's not feasible to help you debug this. To sum up, we need:
- an example with 'rentalType.rentalTypeConnection.objects.objectType'
- an example about 'byAmountOfNights'
- an example about 'byObjectType'
- a reference to 'inverseOf'
You really have to reduce your issue to the smallest contained environment for us to understand it properly without wasting too much time. Perhaps it's better for you to open your own issue.
@dynasource It is in the gist. These are snippets out of thousands of lines; I am not allowed to post all of it. Also, it was basically posted as another example use case for the issue. I don't think my particular code matters that much to the issue. I think the issue has already been made clear by @mklemarczyk in https://github.com/yiisoft/yii2/issues/9495#issuecomment-133378872
Your gist was enough indeed. It took quite a dive into core code to understand the problem, and it does seem to be a bug.
When the linking parameters are not loaded into the model in the first place, it is never possible to access this model attribute at this line: https://github.com/yiisoft/yii2/blob/master/framework/db/ActiveRelationTrait.php#L458
gharchive/issue
2015-08-21T11:11:16
2025-04-01T06:46:18.973204
{ "authors": [ "Alex-Code", "RdeWilde", "dynasource", "mikehaertl", "mklemarczyk", "ptz-nerf" ], "repo": "yiisoft/yii2", "url": "https://github.com/yiisoft/yii2/issues/9495", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
154228967
docs-ja updated [ci skip] Document translations updated [ja] uioo Thank you, @softark!
gharchive/pull-request
2016-05-11T12:15:31
2025-04-01T06:46:18.974752
{ "authors": [ "SilverFire", "softark", "z-n" ], "repo": "yiisoft/yii2", "url": "https://github.com/yiisoft/yii2/pull/11545", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
172365080
Use comma instead of semicolon Merged. Thanks!
gharchive/pull-request
2016-08-22T03:54:04
2025-04-01T06:46:18.975387
{ "authors": [ "Mak-Di", "samdark" ], "repo": "yiisoft/yii2", "url": "https://github.com/yiisoft/yii2/pull/12242", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
299148571
Fix #15656: Error in the documentation of widget theming
Is bugfix? no
New feature? no
Breaks BC? no
Tests pass? no
Fixed issues: #15656
[ci skip]
Merged. Thank you!
gharchive/pull-request
2018-02-21T22:12:25
2025-04-01T06:46:18.977834
{ "authors": [ "nvlad", "samdark" ], "repo": "yiisoft/yii2", "url": "https://github.com/yiisoft/yii2/pull/15734", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
224468383
Error: There is already a steps folder
Perhaps my workflow isn't quite right, but I'm wondering if the leg undiff behaviour could be made more user-friendly when the steps folder already exists. As I'm lazy, I tried to reduce the commands I execute after making any kind of change to just a one-liner: leg diff && leg undiff && leg doc. As-is, my project seems to be small enough for it not to matter that I'm rebuilding everything from scratch (maybe this will change?). However, the leg undiff step is failing with Error: There is already a steps folder. It's pretty trivial, but could undiff be changed to either (a) delete and rewrite the steps folder, or (b) update its contents? Right now I'm just including a rm -r steps command too, which gets the job done, but maybe there's a better solution!
Yes, the way the commands work right now isn't very ideal. It's the result of me originally using step folders to work on the kilo tutorial, and then gradually starting to use git more and more as I got more comfortable with git rebase. I don't use the steps/ representation to make changes to steps anymore, and it sounds like you don't either, but I think some people might still find it useful, like if they aren't really comfortable using git for everything. So that's why leg doesn't want to overwrite the steps/ folder: it might have unsaved changes. One solution that comes to mind is to have a leg sync command that updates all three representations to one of the representations, and leg doc could run the sync operation implicitly. The default representation you want to sync from could be specified in leg.yml. How does that sound?
Thanks, yes I think that sounds good. As I mentioned, it's not hugely difficult right now, but a sync command sounds like a good thing (and is probably easier for new users to get their heads around)!
Okay, the sync command is implemented in 0.24.0. leg help sync explains how it works. You'll need to add the line :sync: repo to your leg.yml. The leg doc command automatically runs leg sync, so most of the time leg doc should be the only command you'll need.
Great, thanks. We've tried this and it's simplified things a lot. Thank you! Richard
gharchive/issue
2017-04-26T13:29:04
2025-04-01T06:46:18.991271
{ "authors": [ "rnorth", "yjerem" ], "repo": "yjerem/leg", "url": "https://github.com/yjerem/leg/issues/2", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1838576426
about model and code
Thank you for your outstanding work. I am also very interested in this paper and the diffusion model. I wonder when the source code and the trained model can be made public. Thanks!
It will be available by the end of this month. Thanks for your interest and patience.
gharchive/issue
2023-08-07T02:44:56
2025-04-01T06:46:19.002120
{ "authors": [ "yl4579", "yyh565655555" ], "repo": "yl4579/StyleTTS2", "url": "https://github.com/yl4579/StyleTTS2/issues/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2011167372
h100: Worse output & 20x slower inference?
We're testing finetuning on an H100 and a 4090; here are the results:
4090: https://voca.ro/11mtxzLHzzih
h100: https://voca.ro/15QldVjuG7nu
Almost identical finetunes, but the H100 output is SIGNIFICANTLY worse. It isn't a config issue, and we've replicated it twice with LJSpeech as well. The 4090 is also faster during training and considerably faster during inference, almost 20x faster than the H100. And during training, one epoch took the 4090 about 3 minutes, while the H100 took 4.12 minutes. Does anyone know what could be going on here? Never seen an issue like this on an H100 before with a diffusion-like model. Thanks
I'm using 2x T4 GPUs on Kaggle and it's working well; check your requirements.txt versions.
Can you be sure they're on the same machine with the same libraries and everything, except the GPU difference?
@addytheyoung and I are working together on this. We are running this on 2 different machines: one is a Lambda Labs H100 and one is a RunPod RTX 4090. The env is the same, with both running in a Python 3.10 virtualenv with the latest repo and dependencies. In the screenshots shared earlier there was an issue with non-HTTP-200 responses leading to invalid timing. However, rerunning the following simple web API on both machines, the 4090 seems to be 2x as fast, where it should really be about 3x slower than the H100:
4090: ~12s @ 100 concurrent requests
h100: ~24s @ 100 concurrent requests
@devidw Have you tried to match the CUDA driver version as well? Have you tried the docker version?
@devidw This is so weird. I've tested the code on A40, A100 and L40, all good with CUDA 11.8. I'm not sure if it is the CUDA driver version that causes this. I don't have an H100 so I can't test it. The worst-case scenario is you compare the output for each module with the same input and see which module causes the issue.
@yl4579 I believe we've done that already, oh well. If anyone else has or can test on an H100, we'd love to hear what's going on.
This is a big deal for scaling inference.
Did you find any output difference, or time each line and find the module that causes the bottleneck?
@addytheyoung I think it's better to check the output from each module, setting a static random seed (don't use the diffusion model), to check which module produces different results, so we can debug from there.
Odd, I've been training a model on an H100 for a day or so and haven't noticed a detriment in speed or quality. Is your repo up to date?
Very interesting, I just found this issue by pure chance, and indeed I had the same problem a few weeks ago. I ran 6 fine-tunings in parallel for several days on a single H100, and once the trainings finished I noticed that all models were absolutely useless. I reran the same trainings on 8x A100 and the results were good! The only difference (besides the number & type of GPUs used) was the per-GPU batch size, but I didn't think that this would have caused the issue. I was (and still am) extremely puzzled... (by the way, yes, I burnt 1-2k$ because of this...)
For me, an H100 training doesn't seem to work either. I tried to resume training of an English model which I'm training from scratch and which was started on 2x A100 80GB. I tried to resume it on a single H100 80GB machine. During the training on 2x A100, my validation loss was about 0.28, but on the H100 it jumped straight to 0.49-0.52 and never got below that. I was using a vast.ai H100 machine and haven't tried multi-GPU training so far, but this H100 issue is indeed very puzzling to me as well.
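The debugging suggestion in the thread (fix a static seed, then compare each module's output across machines) can be sketched framework-agnostically by fingerprinting intermediate outputs; the module names and callables below are hypothetical stand-ins for the model's actual submodules:

```python
import hashlib
import json
import random

def output_fingerprints(modules, sample, seed=0):
    # Reset the RNG before every module so both machines start from the
    # same state, then hash each output: diffing the fingerprints from
    # the 4090 and the H100 pinpoints the first diverging module.
    fingerprints = {}
    for name, fn in modules.items():
        random.seed(seed)
        out = fn(sample)
        blob = json.dumps(out, sort_keys=True).encode()
        fingerprints[name] = hashlib.sha256(blob).hexdigest()[:12]
    return fingerprints

# Stand-ins; on the real model these would wrap submodule forward passes.
modules = {
    "text_encoder": lambda x: [v * 2 for v in x],
    "decoder": lambda x: [round(v + random.random(), 6) for v in x],
}
prints = output_fingerprints(modules, [1.0, 2.0, 3.0])
```

Run the same script on both GPUs and compare the printed hashes: the first module whose fingerprint differs is where numerical behavior diverges (on real hardware, exact-match hashing may be too strict for float outputs, so comparing rounded values is a reasonable relaxation).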
gharchive/issue
2023-11-26T19:12:08
2025-04-01T06:46:19.012905
{ "authors": [ "AMEERAZAM08", "addytheyoung", "devidw", "korakoe", "martinambrus", "sch0ngut", "yl4579" ], "repo": "yl4579/StyleTTS2", "url": "https://github.com/yl4579/StyleTTS2/issues/89", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2008144508
Add Replicate demo
Hi, thank you for the great work! I implemented a Cog wrapper around StyleTTS 2 and added a demo to Replicate. This pull request makes it possible to run your model inside a Docker environment using Cog (https://github.com/replicate/cog) and adds a link to the Replicate demo. I'd also be happy to transfer the Replicate demo to you if you create an organization on Replicate. Kind regards, Alara
I think the output SR is not correct on Replicate. Can you fix it before I merge?
@yl4579 is that why the voice cloning fails in the Replicate demo?
@platform-kit I don't know, I'm not familiar with Replicate, and I'm not the author of the demo either. I can see the problem is this line for output: audio = AudioSegment(audio.data, frame_rate=22050, sample_width=2, channels=1); the sr should be 24000 instead.
Hi, just a suggestion: do you want to add a note saying that the Replicate demo has more features but will take a minute to start up (I'm thinking of how it takes a couple of minutes to start up from Replicate with cold boots)? Thanks!
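The sample-rate fix discussed in the thread (tag the output as 24000 Hz, not 22050 Hz) can be shown with a stdlib-only sketch; the demo's actual code uses pydub's AudioSegment, for which the analogous change is frame_rate=24000:

```python
import io
import wave

def write_mono_wav(samples, sample_rate=24000):
    # StyleTTS 2 generates 24 kHz audio; tagging the same frames as
    # 22050 Hz (as the demo did) makes playback slow and pitched down.
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 16-bit PCM
        w.setframerate(sample_rate)
        frames = b"".join(
            int(max(-1.0, min(1.0, s)) * 32767).to_bytes(2, "little", signed=True)
            for s in samples)
        w.writeframes(frames)
    return buf.getvalue()

wav_bytes = write_mono_wav([0.0, 0.5, -0.5])
```

Note that the sample data itself is unchanged in either case; only the header's frame rate differs, which is why the 22050 Hz version sounds like the same audio played at ~92% speed.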
gharchive/pull-request
2023-11-23T12:35:03
2025-04-01T06:46:19.016479
{ "authors": [ "alaradirik", "fakerybakery", "platform-kit", "yl4579" ], "repo": "yl4579/StyleTTS2", "url": "https://github.com/yl4579/StyleTTS2/pull/71", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1719049933
Checkpoints produced by run_clm_sft_with_peft.py contain no adapter_config.json or adapter_model.bin
Detailed description of the problem:
I trained an Alpaca model from scratch with run_clm_sft_with_peft.py and LLaMA-Plus-7B, but the checkpoints contain no adapter_config.json or adapter_model.bin. Merging directly with merge_llama_with_chinese_lora.py fails.
Command and log:
python ./scripts/merge_llama_with_chinese_lora.py \
--base_model '/xxx/llama-7b-hf' \
--lora_model '/xxx/chinese-llama-plus-lora-7b', '/xxx/chinese-alpaca-lora-7b' \
--output_type huggingface \
--output_dir '/xxx/chinese-alpaca-7b'
Here chinese-alpaca-lora-7b is the folder where I saved my trained model. It then errors with:
ValueError: Can't find 'adapter_config.json' at '/xxx/chinese-alpaca-lora-7b'
Required checklist (for the first three items, keep only the one you are asking about):
[x] Base model: Alpaca-Plus
[x] OS: Linux
[x] Issue category: model conversion and merging / model training and fine-tuning
[x] (Required) Since the related dependencies are updated frequently, make sure you followed the relevant steps in the Wiki
[x] (Required) I have read the FAQ section and searched the issues; I found no similar problem or solution
[ ] (Required) Third-party tool issues: e.g. llama.cpp, text-generation-webui, LlamaChat, etc.; it is recommended to also look for solutions in the corresponding projects
See #361
gharchive/issue
2023-05-22T07:13:15
2025-04-01T06:46:19.033506
{ "authors": [ "iMountTai", "liuyukid" ], "repo": "ymcui/Chinese-LLaMA-Alpaca", "url": "https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/405", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1366345117
Maya: Extractor "Look" fails on maketx utility for image texture conversion (Png > tx) / or missing OCIO.config problem?!
Unable to publish the asset look because of a crash during image texture conversion from PNG to tx format using the maketx utility.
Instance: lookMain
Message: Command 'D:\REPO\OpenPype\vendor\bin\oiio\windows\maketx.exe -v -u --unpremult --checknan --oiio --filter lanczos3 D:\PROJECTS\OP01_CG_demo\assets\props\airConditionerB\work\lookdev\textures\ac_DefaultMaterial_Roughness.1001.png --sattrib sourceHash ac_DefaultMaterial_Roughness,1001,png|1652796456,0|325183|maketx --colorconfig D:\REPO\OpenPype\vendor\bin\ocioconfig\OpenColorIOConfigs\nuke-default\config.ocio -o C:\Users\Libor\AppData\Local\Temp\pyblish_tmp_w4v7_ip1\resources\ac_DefaultMaterial_Roughness.1001.tx' returned non-zero exit status 1.
Line: 512
Traceback:
Traceback (most recent call last):
File "D:\REPO\OpenPype\.venv\lib\site-packages\pyblish\plugin.py", line 522, in __explicit_process
runner(*args)
File "D:\REPO\OpenPype\openpype\hosts\maya\plugins\publish\extract_look.py", line 252, in process
File "D:\REPO\OpenPype\openpype\hosts\maya\plugins\publish\extract_look.py", line 389, in process_resources
File "D:\REPO\OpenPype\openpype\hosts\maya\plugins\publish\extract_look.py", line 549, in _process_texture
File "D:\REPO\OpenPype\openpype\hosts\maya\plugins\publish\extract_look.py", line 121, in maketx
File "C:\Program Files\Autodesk\Maya2022\Python37\lib\subprocess.py", line 411, in check_output
**kwargs).stdout
File "C:\Program Files\Autodesk\Maya2022\Python37\lib\subprocess.py", line 512, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command 'D:\REPO\OpenPype\vendor\bin\oiio\windows\maketx.exe -v -u --unpremult --checknan --oiio --filter lanczos3 D:\PROJECTS\OP01_CG_demo\assets\props\airConditionerB\work\lookdev\textures\ac_DefaultMaterial_Roughness.1001.png --sattrib sourceHash ac_DefaultMaterial_Roughness,1001,png|1652796456,0|325183|maketx --colorconfig D:\REPO\OpenPype\vendor\bin\ocioconfig\OpenColorIOConfigs\nuke-default\config.ocio -o C:\Users\Libor\AppData\Local\Temp\pyblish_tmp_w4v7_ip1\resources\ac_DefaultMaterial_Roughness.1001.tx' returned non-zero exit status 1.
[cuID:OP-3944]
Also attaching the image file which caused it...
@LiborBatek This would be better to debug if the subprocess output were logged as well. Could you change this logic to something like this:
except subprocess.CalledProcessError as exc:
    # Log subprocess output
    self.log.error(exc.output)
    # Log exception
    self.log.error(traceback.format_exc())
    raise
return out
And then try again? The subprocess.CalledProcessError.output should be the console output produced by the subprocess in the case of an exception. Likely maketx will have logged more useful information there, which can help us pinpoint the issue.
@BigRoy thx, I invited the guys to look at that... I'm just not able to do it myself, being just a poor CG artist :)
This is caused by a "missing OCIO config" error in maketx. Strange, as it happens only to specific files in specific scenarios (and with the OCIO config present). I'll add more details.
Could it be that the subprocess paths passed along don't get the backslashes or spaces in paths escaped correctly? So that only some OCIO paths might be invalid? We could try enforcing forward slashes?
@LiborBatek @antirotor Would this branch work? Here's a diff without opening a PR directly. Also, that vendorized OCIO config would only be present if one had correctly pulled the latest binaries and rebuilt the environment, etc., along with the recent refactoring, instead of only checking out this branch. That could've also been the problem?
@LiborBatek is this still happening?
@antirotor did test it in latest develop and it works normally without any glitch!
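The logging change proposed in the thread can be written out as a runnable sketch; the maketx command line and OpenPype's self.log are replaced with a generic command and print(), so everything here besides the CalledProcessError handling is a stand-in:

```python
import subprocess
import sys
import traceback

def run_logged(cmd):
    # On failure, CalledProcessError.output carries the child's console
    # output (stderr merged in via STDOUT), which is the kind of detail
    # that exposed maketx's real "missing OCIO config" message.
    try:
        return subprocess.check_output(cmd, stderr=subprocess.STDOUT, text=True)
    except subprocess.CalledProcessError as exc:
        print("subprocess output:", exc.output)
        print(traceback.format_exc())
        raise

ok = run_logged([sys.executable, "-c", "print('ok')"])
```

Without stderr=subprocess.STDOUT, a tool that writes its diagnostics to stderr (as maketx does) would leave exc.output empty, so merging the streams is the important part of the fix.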
gharchive/issue
2022-09-08T13:33:42
2025-04-01T06:46:19.044995
{ "authors": [ "BigRoy", "LiborBatek", "antirotor" ], "repo": "ynput/OpenPype", "url": "https://github.com/ynput/OpenPype/issues/3814", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2644719428
Sync attribute changes from AYON to ftrack Is there an existing issue for this? [x] I have searched the existing issues and added correct labels. Please describe the feature you have in mind and explain what the current shortcomings are? Sync attribute changes from AYON to ftrack Suggested implementation? No response Describe alternatives you've considered: No response Resolved with https://github.com/ynput/ayon-ftrack/pull/172
gharchive/issue
2024-11-08T17:38:09
2025-04-01T06:46:19.047600
{ "authors": [ "iLLiCiTiT" ], "repo": "ynput/ayon-ftrack", "url": "https://github.com/ynput/ayon-ftrack/issues/162", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
202381758
log 4xx and 5xx errors i am thinking something like this as a fix for #49 oh lol bahhhh ofc facepalm @yoshuawuyts requested changes on this pull request. we need more math In index.js https://github.com/yoshuawuyts/merry/pull/56#pullrequestreview-17828629: @@ -80,6 +80,12 @@ Merry.prototype.router = function (opts, routes) { var statusCode = err.output.statusCode || (res.statusCode >= 400 ? res.statusCode : 500) if (statusCode === 400) { this won't work if we hit something like a 401 Did you cut a release for this too btw? yeppppp it's the 4.1.4 (https://github.com/yoshuawuyts/merry/commit/6826070ffb0ca6d5321202e63c5c658eaa7d9260) one ✨ Swooooosh 🏄🌊🌊 🌊🌊🌊
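The review comment above points out that a bare `statusCode === 400` check misses other client errors such as a 401; what is wanted is a range check. A minimal sketch of that range-based idea (written in Python rather than the project's JavaScript, and not merry's actual API):

```python
def log_level_for_status(status_code):
    """Map an HTTP status code to a log level: 5xx -> error, 4xx -> warn."""
    if 500 <= status_code <= 599:
        return "error"
    if 400 <= status_code <= 499:
        return "warn"
    return "info"

# A 401 falls inside the 4xx range, which a `== 400` check would miss.
print(log_level_for_status(401))  # warn
print(log_level_for_status(503))  # error
```

The same two range comparisons, ported back to JavaScript, are what the merged fix needed instead of the single equality test.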
gharchive/pull-request
2017-01-22T13:06:59
2025-04-01T06:46:19.216635
{ "authors": [ "lrlna", "yoshuawuyts" ], "repo": "yoshuawuyts/merry", "url": "https://github.com/yoshuawuyts/merry/pull/56", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
240431670
Chrome 60 will require blobs to be CORS whitelisted https://developers.google.com/web/updates/2017/06/chrome-60-deprecations Not sure what implications are, but we should at least inform people of this. Also cc/ @lrlna, do we have thoughts on CORS in merry? Do you know how CORS is handled in nginx / CDNs? hey lol sorry i am horrible at emails coming in from github SO! I don't know how to handle it on the nginx level, never done it there. But with merry, you really just set the appropriate headers and methods on the res obj, kind of like i did here. <-- this was done for a previous version of merry that expected a middleware thing, but i can tweek it for this version of merry. Yeah, good one - we should probs also create one for https://www.npmjs.com/package/helmet
gharchive/issue
2017-07-04T14:08:36
2025-04-01T06:46:19.219815
{ "authors": [ "lrlna", "yoshuawuyts" ], "repo": "yoshuawuyts/nanobeacon", "url": "https://github.com/yoshuawuyts/nanobeacon/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
57710937
Close the window when pressing the "esc" key This will close the current window when pressing the escape key. Except on Darwin this exits the app. Just to verify: does this also close the application on other platforms? Changes look good though, thanks! It's the same as pressing the [X] button on the window title. It closes the current window - on all platforms. If it's the last window it will quit the app except if os.platform === 'darwin' (OS X) which I assume is the standard behavior on OS X. But it doesn't close other windows. So it won't kill your other vmd windows if you happen to have multiple files open. :+1: let's merge it then! Published a 1.2.0 since it's sort of a change in behavior.
gharchive/pull-request
2015-02-14T23:35:56
2025-04-01T06:46:19.222670
{ "authors": [ "maxkueng", "yoshuawuyts" ], "repo": "yoshuawuyts/vmd", "url": "https://github.com/yoshuawuyts/vmd/pull/13", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1885080468
A small suggestion. Change the sqlite config to:

```php
// Example of a more complex structure
return [
    /** The default directory is the absolute path configured in app.php */
    /** This puts the table directly under the default directory; the table has three fields: id (primary key), name, created */
    'user' => 'id INTEGER PRIMARY KEY,name varchar(24),created text(12)',
    /** Every outer key is a directory, relative to the absolute path; this table will be placed under hello/world */
    'hello' => [
        'world' => [
            /** A table is defined here */
            'user' => 'id INTEGER PRIMARY KEY,name varchar(24),created text(12)'
        ]
    ],
    // Directory reserved for plugins, for later use with the :memory: in-memory database
    'logs' => [
        'error' => 'id INTEGER AUTOINCREMENT,sql text(500),status varchar(24),created text(12)',
    ],
];
```

Connecting to sqlite:

```php
/** Connection config (so users can tell which config file is read), the file that stores the data, and the table name */
$db = sqlite('hello.world', 'test', 'user');
```

Then, when Sqlite.php reads 'hello.world', it should check whether that directory exists and create it if it does not. Personally I feel the directory should be created before the SQLite3 class is instantiated.

When I actually map out the sqlite config table parameters, '/' is converted to '.', but for SQLite3('absolute path'): to stay compatible with folder names that contain a dot (such as config/abc.com), the first parameter still uses a relative path with '/'. As for creating the directory before instantiating the SQLite3 class: it should indeed, and this has now been fixed by moving the directory creation into the first-time config load, which runs only once. Duplicate of #
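To illustrate the nested-config idea above in a language-neutral way (the project itself is PHP, so the function and default file name here are made up for illustration): outer keys become directories, leaf string values become CREATE TABLE column definitions, and the directory is created before the database file is opened. A hedged Python sketch using the standard sqlite3 module:

```python
import os
import sqlite3
import tempfile

def create_tables(config, base_dir, db_name="data.db"):
    """Walk the nested config: dict values are subdirectories, string values
    are column definitions for a table named after the key. The directory is
    created *before* the database is opened, as the discussion suggests."""
    created = []
    tables = {k: v for k, v in config.items() if isinstance(v, str)}
    if tables:
        os.makedirs(base_dir, exist_ok=True)  # create the directory first
        conn = sqlite3.connect(os.path.join(base_dir, db_name))
        for name, columns in tables.items():
            conn.execute(f"CREATE TABLE IF NOT EXISTS {name} ({columns})")
            created.append(os.path.join(base_dir, db_name) + ":" + name)
        conn.commit()
        conn.close()
    for key, value in config.items():
        if isinstance(value, dict):  # recurse into subdirectories
            created += create_tables(value, os.path.join(base_dir, key), db_name)
    return created

config = {
    "user": "id INTEGER PRIMARY KEY, name varchar(24), created text(12)",
    "hello": {"world": {"user": "id INTEGER PRIMARY KEY, name varchar(24)"}},
}
base = tempfile.mkdtemp()
print(create_tables(config, os.path.join(base, "demo")))  # two tables, one per directory
```

This is only a sketch of the layout rule, not the PHP project's actual loader; in particular the `data.db` file name is an assumption, since in the real API the file name is passed at connect time.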
gharchive/issue
2023-09-07T04:09:41
2025-04-01T06:46:19.267445
{ "authors": [ "2723659854", "youfeed" ], "repo": "youfeed/sqlite", "url": "https://github.com/youfeed/sqlite/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
340869383
Bot exits automatically after running for a while. In my program I set:

bot.join()
embed()

But after it was deployed to a server and ran for two days, it printed LOG OUT! and the program exited on its own.

I have the same problem... have you solved it?
I have the same problem... have you solved it?
Same question; this problem has been driving me crazy lately and I still haven't found a solution.
Same question; this problem has been driving me crazy lately and I still haven't found a solution.
Is there a solution now?
gharchive/issue
2018-07-13T03:32:16
2025-04-01T06:46:19.269942
{ "authors": [ "BasuLee", "Diamond-py", "NKU-Nikoni", "YYJeffrey", "wangbingfei" ], "repo": "youfou/wxpy", "url": "https://github.com/youfou/wxpy/issues/317", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1065138048
ContactPicker disables TextFields in SwiftUI view. When using the ContactPicker, my code does not allow me to access the TextFields. Here's my code using your Picker:

```swift
struct AddNewRecipientView: View {
    @Environment(\.managedObjectContext) var moc
    @Environment(\.presentationMode) var presentationMode

    private let borderWidth: CGFloat = 1.0

    @State private var lastName: String = ""
    @State private var firstName: String = ""
    @State private var addressLine1: String = ""
    @State private var addressLine2: String = ""
    @State private var city: String = ""
    @State private var state: String = ""
    @State private var zip: String = ""
    @State private var country: String = ""
    @State var showPicker = false

    init() {
        let navBarApperance = UINavigationBarAppearance()
        navBarApperance.largeTitleTextAttributes = [
            .foregroundColor: UIColor.systemGreen,
            .font: UIFont(name: "ArialRoundedMTBold", size: 35)!]
        navBarApperance.titleTextAttributes = [
            .foregroundColor: UIColor.systemGreen,
            .font: UIFont(name: "ArialRoundedMTBold", size: 20)!]

        UINavigationBar.appearance().standardAppearance = navBarApperance
        UINavigationBar.appearance().scrollEdgeAppearance = navBarApperance
        UINavigationBar.appearance().compactAppearance = navBarApperance
    }

    var body: some View {
        NavigationView {
            GeometryReader { geomtry in
                VStack {
                    Spacer()
                    HStack {
                        VStack(alignment: .leading) {
                            TextField("First Name", text: $firstName)
                                .customTextField()
                        }
                        VStack(alignment: .leading) {
                            TextField("Last Name", text: $lastName)
                                .customTextField()
                        }
                    }
                    TextField("Address Line 1", text: $addressLine1)
                        .customTextField()
                    TextField("Address Line 2", text: $addressLine2)
                        .customTextField()
                    HStack {
                        TextField("City", text: $city)
                            .customTextField()
                            .frame(width: geomtry.size.width * 0.48)
                        Spacer()
                        TextField("ST", text: $state)
                            .customTextField()
                            .frame(width: geomtry.size.width * 0.18)
                        Spacer()
                        TextField("Zip", text: $zip)
                            .customTextField()
                            .frame(width: geomtry.size.width * 0.28)
                    }
                    TextField("Country", text: $country)
                        .customTextField()
                    Spacer()
                    Spacer()
                }
                ContactPicker(showPicker: $showPicker, onSelectContact: { contact in
                    firstName = contact.givenName
                    lastName = contact.familyName
                    if contact.postalAddresses.count > 0 {
                        if let addressString = (((contact.postalAddresses[0] as AnyObject).value(forKey: "labelValuePair") as AnyObject).value(forKey: "value")) as? CNPostalAddress {
                            // swiftlint:disable:next line_length
                            let mailAddress = CNPostalAddressFormatter.string(from: addressString, style: .mailingAddress)
                            addressLine1 = "\(addressString.street)"
                            addressLine2 = ""
                            city = "\(addressString.city)"
                            state = "\(addressString.state)"
                            zip = "\(addressString.postalCode)"
                            country = "\(addressString.country)"
                            print(mailAddress)
                        }
                    } else {
                        addressLine1 = "No Address Provided"
                        addressLine2 = ""
                        city = ""
                        state = ""
                        zip = ""
                        country = ""
                        print("No Address Provided")
                    }
                    self.showPicker.toggle()
                }, onCancel: nil)
            }
            .padding([.leading, .trailing], 10)
            .navigationTitle("Recipient")
            .navigationBarItems(trailing: HStack {
                Button(action: {
                    let contactsPermsissions = checkContactsPermissions()
                    if contactsPermsissions == true {
                        self.showPicker.toggle()
                    }
                }, label: {
                    Image(systemName: "magnifyingglass")
                        .font(.largeTitle)
                        .foregroundColor(.green)
                })
                Button(action: {
                    saveRecipient()
                    self.presentationMode.wrappedValue.dismiss()
                }, label: {
                    Image(systemName: "square.and.arrow.down")
                        .font(.largeTitle)
                        .foregroundColor(.green)
                })
                Button(action: {
                    self.presentationMode.wrappedValue.dismiss()
                }, label: {
                    Image(systemName: "chevron.down.circle.fill")
                        .font(.largeTitle)
                        .foregroundColor(.green)
                })
            })
        }
    }

    func saveRecipient() {
        print("Saving...")
        if firstName != "" {
            let recipient = Recipient(context: self.moc)
            recipient.firstName = firstName
            recipient.lastName = lastName
            recipient.addressLine1 = addressLine1.capitalized(with: NSLocale.current)
            recipient.addressLine2 = addressLine2.capitalized(with: NSLocale.current)
            recipient.state = state.uppercased()
            recipient.city = city.capitalized(with: NSLocale.current)
            recipient.zip = zip
            recipient.country = country.capitalized(with: NSLocale.current)
        }
        do {
            try moc.save()
        } catch let error as NSError {
            print("Save error: \(error), \(error.userInfo)")
        }
    }

    func checkContactsPermissions() -> Bool {
        let authStatus = CNContactStore.authorizationStatus(for: .contacts)
        switch authStatus {
        case .restricted:
            print("User cannot grant premission, e.g. perental controls are in force.")
            return false
        case .denied:
            print("User has denided permissions")
            // add a popup to say you have denied permissions
            return false
        case .notDetermined:
            print("you need to request authorization via the API now")
        case .authorized:
            print("already authorized")
        @unknown default:
            print("unknown error")
            return false
        }
        let store = CNContactStore()
        if authStatus == .notDetermined {
            store.requestAccess(for: .contacts) { success, error in
                if !success {
                    print("Not authorized to access contacts. Error = \(String(describing: error))")
                    exit(1)
                }
                print("Authorized")
            }
        }
        return true
    }
}
```

This appears to be a problem with my code, closing issue.
gharchive/issue
2021-11-27T23:11:27
2025-04-01T06:46:19.274961
{ "authors": [ "TheApApp" ], "repo": "youjinp/SwiftUIKit", "url": "https://github.com/youjinp/SwiftUIKit/issues/30", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1088988446
Fix empty image issue by considering status code & separate WEBCAMS (dict) as a json file Thanks for your amazing code to get the images from roundshot! When I run the code to extract starting from 2021-12-25_00-00, the original script may create an empty image when the status code equals 404. So I mainly updated the following aspects: add an if-else condition to determine the status of the response. separate WEBCAMS (dict) from main.py into webcams.json to increase scalability so more links can be added. Some minor changes are: format the code with black update img_path prefix with quality check whether img_path exists before the GET request add time.sleep(10) to avoid 429 Too Many Requests errors thank you a lot for the merge request! To be honest, I was aware that the code was a little broken, but did not have enough motivation to fix it 😅 No worries 🙌 Thanks a lot for your codebase so that I could have a quick start to extract images from roundshot to construct a personal dataset for research use. What are you researching, if you don't mind me asking? It sounds quite interesting. It is related to place recognition for robots. I would like to extract the images in different seasons to construct a new database. Thanks again for your code 😄
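A minimal sketch of the three changes described above: skip frames that already exist on disk, refuse to write an image when the status is not 200 or the body is empty, and sleep between requests to avoid HTTP 429. This is not the scraper's actual code; the fetch callable stands in for whatever HTTP client the project uses, and must return a (status_code, bytes) pair:

```python
import os
import tempfile
import time

def scrape_image(url, img_path, fetch, pause=10):
    """Fetch one webcam frame, skipping existing files, 404s, and empty
    bodies; sleep between requests to throttle against 429 responses."""
    if os.path.exists(img_path):      # skip already-downloaded frames
        return "exists"
    status, body = fetch(url)
    if status != 200 or not body:     # e.g. a 404 must not leave an empty file
        return f"skipped ({status})"
    with open(img_path, "wb") as f:
        f.write(body)
    time.sleep(pause)                 # throttle: avoid 429 Too Many Requests
    return "saved"

demo_path = os.path.join(tempfile.mkdtemp(), "frame.jpg")
print(scrape_image("https://example.invalid/cam.jpg", demo_path,
                   fetch=lambda u: (404, b""), pause=0))  # skipped (404)
```

In the real PR the fetch step would wrap the HTTP library's GET and its `status_code`, and `pause=10` mirrors the `time.sleep(10)` mentioned above.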
gharchive/pull-request
2021-12-27T06:02:21
2025-04-01T06:46:19.280229
{ "authors": [ "hibetterheyj", "youngtrashbag" ], "repo": "youngtrashbag/wildspitz-webcam-scraper", "url": "https://github.com/youngtrashbag/wildspitz-webcam-scraper/pull/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
836815253
Connection to the GitHub API. I want to be able to connect with the GitHub API.

[x] Get an access token
[x] Make the first GET request
[x] Prepare a postman request collection

The following has been done:

[x] created the endpoint on the server to receive the access_token from the GitHub API
[x] created the callback endpoint to handle the OAuth flow
[x] created a postman collection with the test calls that were made
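A hedged sketch of the two OAuth pieces described above, exchanging the callback code for an access token and making an authenticated GET, using only the standard library. The endpoint paths follow GitHub's documented OAuth web flow; the auto-cv server's own endpoint names are not shown here, and the token below is a dummy value:

```python
import urllib.parse
import urllib.request

def build_token_request(client_id, client_secret, code):
    """POST to github.com/login/oauth/access_token to trade the callback
    `code` for an access token (per GitHub's OAuth web flow)."""
    data = urllib.parse.urlencode(
        {"client_id": client_id, "client_secret": client_secret, "code": code}
    ).encode()
    return urllib.request.Request(
        "https://github.com/login/oauth/access_token",
        data=data,
        headers={"Accept": "application/json"},
        method="POST",
    )

def build_api_request(access_token, path="/user"):
    """Authenticated GET against api.github.com using the token."""
    return urllib.request.Request(
        "https://api.github.com" + path,
        headers={"Authorization": f"token {access_token}",
                 "Accept": "application/vnd.github+json"},
    )

req = build_api_request("dummy-token")
print(req.get_header("Authorization"))  # token dummy-token
```

Only the request objects are built here (no network call), so the same construction can be dropped behind whatever server endpoints the project exposes.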
gharchive/issue
2021-03-20T13:57:19
2025-04-01T06:46:19.292588
{ "authors": [ "youssefmzouri" ], "repo": "youssefmzouri/auto-cv", "url": "https://github.com/youssefmzouri/auto-cv/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
195353600
Topo-related changes. The end goal is to refactor the topology server code so it's a much smaller set of calls, easier to maintain, more consistent across implementations, and less error prone. This PR does the following: Introduce topodata.CellInfo. Stored in the global topology server, it describes a connection to a topology server for a cell. It has both a server address and a root directory to use. Also adding vtctl commands to deal with it. Existing zk and etcd implementations ignore these flags for now. changing the topo server registration to use a Factory method. It creates a topology server implementation with a server address and a root path. Existing zk and etcd implementations ignore these flags for now. adding a zk2 topology implementation. It doesn't use any of the go/zk code, and allows the specification of a root directory for both the global cell, and each individual cell. It also is using a different directory structure, consistent with what we want all new topo implementations to use. And it stores the data as protobuf, not json. removing old janitor code, been replaced by topo validator workflow. @michael-berlin this is ready for initial review. Some integration tests don't pass yet, so I probably still have a few details to go. Let me know what you think about keeping the default to 'zk2' for our integration tests, vs reverting to zookeeper. In any case, I'll change travis to have one test at least still run the other.
gharchive/pull-request
2016-12-13T20:02:56
2025-04-01T06:46:19.295811
{ "authors": [ "alainjobart" ], "repo": "youtube/vitess", "url": "https://github.com/youtube/vitess/pull/2367", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
200278546
404 configuration not working. Following the tutorial, I set the 404 page path to blog/source/404.html, with the following content:

layout: false
title: "温斯渤 | 404"
---
This is my blog

How can I fix this? Thanks.

http://www.wensibo.top/404.html itself is fine; this should be related to your web server's 404 configuration.
gharchive/issue
2017-01-12T04:56:41
2025-04-01T06:46:19.359551
{ "authors": [ "Wensibob", "yscoder" ], "repo": "yscoder/hexo-theme-indigo", "url": "https://github.com/yscoder/hexo-theme-indigo/issues/161", "license": "unlicense", "license_type": "permissive", "license_source": "bigquery" }
1281969899
ABC (Go) Checklist [X] I'm reporting a broken site [X] I've verified that I'm running yt-dlp version 2022.06.22.1 (update instructions) or later (specify commit) [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details [X] I've checked that all URLs and arguments with special characters are properly quoted or escaped [X] I've searched the bugtracker for similar issues including closed ones. DO NOT post duplicates [X] I've read the guidelines for opening an issue [ ] I've read about sharing account credentials and I'm willing to share it if required Region US Description Video plays fine within browser using VPN but get warning of: [Go] 1002:You appear to be outside the United States or its territories. Due to international rights agreements, we only offer this video to viewers located within the United States and its territories. Tried no-geo-bypass without success. https://abc.com/shows/good-morning-america/episode-guide/2022-05/16-monday-may-16-2022 Thanks for any help. 
Verbose log [debug] Command-line config: ['--proxy=redacted', 'https://abc.com/shows/good-morning-america/episode-guide/2022-05/16-monday-may-16-2022', '--no-geo-bypass', '-v'] [debug] User config "/home/remlap/.config/yt-dlp.conf": ['--hls-prefer-native', '--add-metadata', '--sub-langs', 'all', '--embed-subs'] [debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8 (No ANSI), error utf-8, screen utf-8 (No ANSI) [debug] yt-dlp version 2022.06.22.1 [a86e01e] [debug] Python version 3.10.4 (CPython 64bit) - Linux-5.15.0-40-generic-x86_64-with-glibc2.35 [debug] Checking exe version: ffmpeg -bsfs [debug] Checking exe version: ffprobe -bsfs [debug] exe versions: ffmpeg 4.4.2 (setts), ffprobe 4.4.2, rtmpdump 2.4 [debug] Optional libraries: Cryptodome-3.11.0, brotli-1.0.9, certifi-2020.06.20, mutagen-1.45.1, pyxattr-0.7.2, secretstorage-3.3.1, sqlite3-2.6.0, websockets-9.1 [debug] Proxy map: {'http': 'redacted'} [debug] [Go] Extracting URL: https://abc.com/shows/good-morning-america/episode-guide/2022-05/16-monday-may-16-2022 ERROR: [Go] 1002:You appear to be outside the United States or its territories. Due to international rights agreements, we only offer this video to viewers located within the United States and its territories. This video is available in United States. You might want to use a VPN or a proxy server (with --proxy) to workaround. Traceback (most recent call last): File "/home/remlap/.local/lib/python3.10/site-packages/yt_dlp/extractor/common.py", line 647, in extract ie_result = self._real_extract(url) File "/home/remlap/.local/lib/python3.10/site-packages/yt_dlp/extractor/go.py", line 252, in _real_extract self.raise_geo_restricted( File "/home/remlap/.local/lib/python3.10/site-packages/yt_dlp/extractor/common.py", line 1114, in raise_geo_restricted raise GeoRestrictedError(msg, countries=countries) yt_dlp.utils.GeoRestrictedError: 1002:You appear to be outside the United States or its territories. 
Due to international rights agreements, we only offer this video to viewers located within the United States and its territories. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/remlap/.local/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py", line 1427, in wrapper return func(self, *args, **kwargs) File "/home/remlap/.local/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py", line 1497, in __extract_info ie_result = ie.extract(url) File "/home/remlap/.local/lib/python3.10/site-packages/yt_dlp/extractor/common.py", line 673, in extract raise type(e)(e.orig_msg, **kwargs) yt_dlp.utils.GeoRestrictedError: [Go] 1002:You appear to be outside the United States or its territories. Due to international rights agreements, we only offer this video to viewers located within the United States and its territories Closing it must be my vpn or something works on another.
gharchive/issue
2022-06-23T07:50:28
2025-04-01T06:46:19.378214
{ "authors": [ "remlap" ], "repo": "yt-dlp/yt-dlp", "url": "https://github.com/yt-dlp/yt-dlp/issues/4148", "license": "Unlicense", "license_type": "permissive", "license_source": "github-api" }
1283267283
Disable YouTube Checklist [X] I'm asking a question and not reporting a bug or requesting a feature [X] I've looked through the README [X] I've verified that I'm running yt-dlp version 2022.06.22.1 (update instructions) or later (specify commit) [X] I've searched the bugtracker for similar questions including closed ones. DO NOT post duplicates [X] I've read the guidelines for opening an issue Question Is there any option to disable downloading YouTube videos? Some sites embed YouTube videos in a frame, and if I pass such a site's URL to yt-dlp, the embedded YouTube video gets downloaded. I want to disable this type of download. In short: if a YouTube video URL is found in the webpage, skip the download. If this is doable, please guide me. Verbose log No response --match-filter 'extractor!~=youtube' @flashdagger Thank you very much. It is working as expected.
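The `--match-filter 'extractor!~=youtube'` answer above tells yt-dlp to skip any entry whose extractor name matches the regex "youtube". The same idea expressed as a plain-Python predicate, purely as an illustration of the filter semantics rather than yt-dlp's internal implementation:

```python
import re

def skip_reason(info, pattern="youtube"):
    """Return a skip message when the entry's extractor matches `pattern`
    (case-insensitively), or None to let the download proceed, mirroring
    how a match filter accepts or rejects extracted entries."""
    extractor = info.get("extractor", "")
    if re.search(pattern, extractor, re.IGNORECASE):
        return f"skipping {info.get('id')}: extractor {extractor!r} matches {pattern!r}"
    return None

print(skip_reason({"id": "abc", "extractor": "youtube"}))  # -> skip message
print(skip_reason({"id": "xyz", "extractor": "generic"}))  # -> None
```

Entries from an embedding page handled by the generic extractor pass through, while anything whose extractor name contains "youtube" (including variants like "youtube:tab") is rejected.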
gharchive/issue
2022-06-24T05:07:15
2025-04-01T06:46:19.383252
{ "authors": [ "adapana", "flashdagger" ], "repo": "yt-dlp/yt-dlp", "url": "https://github.com/yt-dlp/yt-dlp/issues/4160", "license": "Unlicense", "license_type": "permissive", "license_source": "github-api" }
1936524265
Consider extending the information shown on the commandline when the main video (larger video) cannot be downloaded and instead some smaller video is downloaded on e. g. redtube or other websites that show a similar problem DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE [X] I understand that I will be blocked if I intentionally remove or skip any mandatory* field Checklist [X] I'm requesting a feature unrelated to a specific site [X] I've looked through the README [X] I've verified that I'm running yt-dlp version 2023.10.07 (update instructions) or later (specify commit) [X] I've searched known issues and the bugtracker for similar issues including closed ones. DO NOT post duplicates [X] I've read the guidelines for opening an issue Provide a description that is worded well enough to be understood So I am using yt-dlp and for youtube it works very well. However, on other websites there are some issues. For instance, on redtube (you know, that site for ... educational content and stuff), I consistently now seem to get just a tiny ~9-second download of an intro but the main video is not downloaded. Using an extension like Video downloader in chrome also does not show the URL; or it shows a "Forbidden" entry. OK, this is mostly the status quo. At the same time I know that there has to be a way that I can view the video as-is, because it plays in my browser (thorium, which is based on chrome) just fine. Yet I cannot get yt-dlp to download the main file directly. On the issue tracker there are various older reports where people recommend one to run yt-dlp via: -vU I tried this. Unfortunately that did not really give me a lot of useful information. I am not going to provide a specific URL ... can't reveal that educational purpose as-is, but it affects pretty much any video on redtube for me.
So here is the slightly modified output: [debug] Command-line config: ['-vU', 'https://www.redtube.com/SOME_NUMBER_HERE'] [debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version stable@2023.10.07 [377e85a17] [debug] Lazy loading extractors is disabled [debug] Python 3.12.0 (CPython x86_64 64bit) - Linux-6.1.38-1-MANJARO-x86_64-with-glibc2.37 (OpenSSL 3.1.1 30 May 2023, glibc 2.37) [debug] exe versions: ffmpeg N-112362-g4c422de1db (fdk,setts), ffprobe N-112362-g4c422de1db, rtmpdump 2.4 [debug] Optional libraries: Cryptodome-3.19.0, brotli-1.1.0, certifi-2023.07.22, mutagen-1.47.0, sqlite3-3.43.2, websockets-11.0.3 [debug] Proxy map: {} [debug] Loaded 1886 extractors [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest Available version: stable@2023.10.07, Current version: stable@2023.10.07 yt-dlp is up to date (stable@2023.10.07) [RedTube] Extracting URL: https://www.redtube.com/SOME_NUMBER_HERE [RedTube] SOME_NUMBER_HERE: Downloading webpage [debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id [debug] Default format spec: bestvideo*+bestaudio/best [info] SOME_NUMBER_HERE: Downloading 1 format(s): 0 [debug] Invoking http downloader on "data:image/gif;base64,<data>" [download] Destination: A famous person tries to do something legendary really [SOME_NUMBER_HERE].mp4 [download] 100% of 42.00B in 00:00:00 at 30.33KiB/s The thing is that this does not really tell me much at all. So now comes my feature request: Would it be possible for yt-dlp to give more information in such an event? For instance, the download is only of a 9 seconds video rather than, say, 10 minutes. So I seem to only be able to download that small video (like a decoy). Perhaps yt-dlp could show one or two sentences about this; ideally also how to work around this IF a solution exists. 
Not everyone will scan through old issues to find out how to solve stuff. Right now I also don't know the reason; I assume the remote server somehow blocks yt-dlp, but the video is being played in the browser, so data must have been transferred. Perhaps other useful information could be shown here too; I may not be the only one to struggle with this. In general I think yt-dlp is awesome and I'd love to see it become more of a general downloader of multimedia files, not just from youtube. For youtube it works very well but so many other websites probably use a similar restriction (e. g. the "Forbidden" stuff) and I'd love to just use one tool (yt-dlp) for the job. Interestingly yt-dlp also works for videos on youtube where one has to log in normally, so perhaps that code can also be extended towards what other websites use (I really don't know how that block works, I need to research this eventually since I keep on running into that issue). At any rate, please feel free to ignore this issue if it is not applicable. Provide verbose output that clearly demonstrates the problem [X] Run your yt-dlp command with -vU flag added (yt-dlp -vU <your command line>) [X] If using API, add 'verbose': True to YoutubeDL params instead [X] Copy the WHOLE output (starting with [debug] Command-line config) and insert it below Complete Verbose Output No. yt-dlp is already a general video/audio downloader, which supports over 1000 websites (admittedly most of them are broken, because websites change so often) I can narrow down your issue into 2 problems: yt-dlp should give a warning for tiny downloads (42 bytes in your example log) the redtube extractor is broken and downloads the wrong video It should be noted that yt-dlp does not support piracy sites. However, it seems like redtube is not a piracy site, it's just nsfw and hence you didn Sorry, I accidentally clicked "send" too early. 
Please refer to https://github.com/yt-dlp/yt-dlp/issues/8324 to see my full message This looks like an extractor bug, but we cannot be sure, because: I am not going to provide a specific URL Without this and an unedited verbose log, nothing can be fixed or improved Duplicate of #7659
gharchive/issue
2023-10-11T00:30:08
2025-04-01T06:46:19.397474
{ "authors": [ "bashonly", "gamer191", "rubyFeedback" ], "repo": "yt-dlp/yt-dlp", "url": "https://github.com/yt-dlp/yt-dlp/issues/8324", "license": "Unlicense", "license_type": "permissive", "license_source": "github-api" }
473999050
Embedding subtitles ends in ERROR: Conversion failed! Checklist [x] I'm reporting a broken site support issue [x] I've verified that I'm running youtube-dl version 2019.07.27 [x] I've checked that all provided URLs are alive and playable in a browser [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped [x] I've searched the bugtracker for similar bug reports including closed ones [x] I've read bugs section in FAQ Verbose log ./youtube-sync update PursuitofWonder [debug] System config: [] [debug] User config: [] [debug] Custom config: [] [debug] Command-line args: [u'-v', u'-i', u'-f', u'22/136+bestaudio[ext=m4a]/bestvideo[height<=720]+bestaudio[ext=m4a]/best[height<=720]', u'--all-subs', u'--embed-subs', u'--merge-output-format', u'mkv', u'-o', u'SYNC/PursuitofWonder/ID/%(id)s.mkv', u'https://www.youtube.com/channel/UC-tLyAaPbRZiYrOJxAGB7dQ'] [debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8 [debug] youtube-dl version 2019.07.27 [debug] Python version 2.7.15+ (CPython) - Linux-4.15.0-55-generic-x86_64-with-neon-18.04-bionic [debug] exe versions: ffmpeg 3.4.6, ffprobe 3.4.6 [debug] Proxy map: {} [youtube:channel] UC-tLyAaPbRZiYrOJxAGB7dQ: Downloading channel page [youtube:playlist] UU-tLyAaPbRZiYrOJxAGB7dQ: Downloading webpage [download] Downloading playlist: Uploads from Pursuit of Wonder [youtube:playlist] UU-tLyAaPbRZiYrOJxAGB7dQ: Downloading page #1 [youtube:playlist] playlist Uploads from Pursuit of Wonder: Downloading 118 videos [download] Downloading video 1 of 118 [youtube] oGVhOWqsBWM: Downloading webpage [youtube] oGVhOWqsBWM: Downloading video info webpage [info] Writing video subtitles to: SYNC/PursuitofWonder/ID/oGVhOWqsBWM.en.vtt [debug] Invoking downloader on 
u'https://r1---sn-h0jeened.googlevideo.com/videoplayback?expire=1564423155&ei=k98-Xd3lH9mF7gOZk6SQCg&ip=188.192.204.78&id=o-AMMblYLeUu8C8082Av5b49QnEVTl9J6V0JioNZ4OeTHg&itag=22&source=youtube&requiressl=yes&mm=31%2C26&mn=sn-h0jeened%2Csn-4g5ednss&ms=au%2Conr&mv=m&mvi=0&pl=24&initcwndbps=1963750&mime=video%2Fmp4&ratebypass=yes&dur=354.290&lmt=1563856956947519&mt=1564401462&fvip=1&beids=9466586&c=WEB&txp=2316222&sparams=expire%2Cei%2Cip%2Cid%2Citag%2Csource%2Crequiressl%2Cmime%2Cratebypass%2Cdur%2Clmt&sig=ALgxI2wwRQIgcXQaxIjXihc8-NaINMoDL1kOQdNSGuCbEKqDQ4FqKsgCIQCxpUkaDH9VusAqpreRIFrqBBWApOWjAjO08KwLHOR0Qg%3D%3D&lsparams=mm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Cinitcwndbps&lsig=AHylml4wRQIhALBSWb2iN_YwaZjUQ8CpCC2Bi9gwvMlKDfJI67u1kzeoAiBThssiapqK1wnQuWlikZtCj98tInS5meAh5kf0DKobjw%3D%3D' [download] Destination: SYNC/PursuitofWonder/ID/oGVhOWqsBWM.mkv [download] 100% of 46.87MiB in 00:02 [ffmpeg] Embedding subtitles in 'SYNC/PursuitofWonder/ID/oGVhOWqsBWM.mkv' [debug] ffmpeg command line: ffmpeg -y -loglevel 'repeat+info' -i 'file:SYNC/PursuitofWonder/ID/oGVhOWqsBWM.mkv' -i 'file:SYNC/PursuitofWonder/ID/oGVhOWqsBWM.en.vtt' -map 0 -c copy -map '-0:s' -map '-0:d' '-c:s' mov_text -map '1:0' '-metadata:s:s:0' 'language=eng' 'file:SYNC/PursuitofWonder/ID/oGVhOWqsBWM.temp.mkv' ERROR: Conversion failed! Traceback (most recent call last): File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 2054, in post_process files_to_delete, info = pp.run(info) File "/usr/local/bin/youtube-dl/youtube_dl/postprocessor/ffmpeg.py", line 426, in run self.run_ffmpeg_multiple_files(input_files, temp_filename, opts) File "/usr/local/bin/youtube-dl/youtube_dl/postprocessor/ffmpeg.py", line 235, in run_ffmpeg_multiple_files raise FFmpegPostProcessorError(msg) FFmpegPostProcessorError: Conversion failed! 
[download] Downloading video 2 of 118 [youtube] V2BrmsAYpI0: Downloading webpage [youtube] V2BrmsAYpI0: Downloading video info webpage WARNING: video doesn't have subtitles [debug] Invoking downloader on u'https://r3---sn-h0jeenl7.googlevideo.com/videoplayback?expire=1564423159&ei=l98-XZmZJcyL7gOa1JDABQ&ip=188.192.204.78&id=o-AJQU9EDlMg-bO9hSvUh_cG83TxH1128o6ZCTd7XjoVhu&itag=22&source=youtube&requiressl=yes&mm=31%2C26&mn=sn-h0jeenl7%2Csn-4g5ednsd&ms=au%2Conr&mv=m&mvi=2&pl=24&initcwndbps=1876250&mime=video%2Fmp4&ratebypass=yes&dur=631.954&lmt=1563261094104028&mt=1564401462&fvip=3&beids=9466586&c=WEB&txp=4432432&sparams=expire%2Cei%2Cip%2Cid%2Citag%2Csource%2Crequiressl%2Cmime%2Cratebypass%2Cdur%2Clmt&sig=ALgxI2wwRQIgNysZ2mUMfPREHGAtmaLJLdBPa8w9SQXNlM3J252LEUACIQDPUBnADrdgzG-MoRMVl8D_gFMv_H0Xd1Zn8xKw_5G9jg%3D%3D&lsparams=mm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Cinitcwndbps&lsig=AHylml4wRgIhAOVZ82hz82luNwDCX381znP6IfL3U26ZquW2rnViya37AiEAthc9Q4yad5IHBqQtoiJZHti_9Cxz6W9iN6VMmRlsbNo%3D' [download] Destination: SYNC/PursuitofWonder/ID/V2BrmsAYpI0.mkv [download] 42.1% of 76.03MiB at 22.93MiB/s ETA 00:01^C ERROR: Interrupted by user (1/2) Updating metadata: oGVhOWqsBWM... [title[debug] System config: [] [debug] User config: [] [debug] Custom config: [] [debug] Command-line args: [u'-v', u'--get-filename', u'-o', u'%(title)s', u'--', u'oGVhOWqsBWM'] [debug] Encodings: locale UTF-8, fs UTF-8, out None, pref UTF-8 [debug] youtube-dl version 2019.07.27 [debug] Python version 2.7.15+ (CPython) - Linux-4.15.0-55-generic-x86_64-with-neon-18.04-bionic [debug] exe versions: ffmpeg 3.4.6, ffprobe 3.4.6 [debug] Proxy map: {} ^C ERROR: Interrupted by user Description When i try to download all videos of a channel via youtube-sync i get the error "Conversion failed!" at the step "[ffmpeg] Embedding subtitles in 'SYNC/PursuitofWonder/ID/oGVhOWqsBWM.mkv'". Is this error youtube-dl or ffmpeg related? How to fix this? 
The url of the video is https://www.youtube.com/watch?v=oGVhOWqsBWM and the quality setting i use to download is "22/136+bestaudio[ext=m4a]/bestvideo[height<=720]+bestaudio[ext=m4a]/best[height<=720]" WHY incomplete? What is missing? @dstftw Post the full verbose output. This IS the Full verbose output...i dont got more!!!! No, it's not. Read new issue template. ok updated. check above Do not hardcode extension in output template.
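A plausible reading of the failing command above, offered as an assumption rather than a confirmed diagnosis: the embed step passes -c:s mov_text while writing a .mkv file, and mov_text is an MP4-only subtitle codec that the Matroska muxer rejects, which would produce exactly this "Conversion failed!". The helper below is hypothetical (not youtube-dl's real code) and only illustrates picking a subtitle codec that matches the output container.

```python
# Hypothetical helper, not part of youtube-dl: illustrates why forcing
# "-c:s mov_text" into an MKV fails. mov_text is MP4 timed text; Matroska
# expects a codec such as SubRip (srt) or ASS instead.
def subtitle_codec_for(container):
    """Return a subtitle codec the given container can actually hold."""
    codecs = {
        'mp4': 'mov_text',   # MP4 timed text
        'mov': 'mov_text',
        'mkv': 'srt',        # Matroska: SubRip/ASS, not mov_text
        'webm': 'webvtt',
    }
    # Fall back to SubRip, which most general-purpose containers accept.
    return codecs.get(container, 'srt')

print(subtitle_codec_for('mkv'))   # srt
print(subtitle_codec_for('mp4'))   # mov_text
```

With a mapping like this, an ffmpeg invocation targeting MKV would use -c:s srt (or simply copy text subtitles) instead of mov_text.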
gharchive/issue
2019-07-29T11:30:40
2025-04-01T06:46:19.407239
{ "authors": [ "dstftw", "thrdroom" ], "repo": "ytdl-org/youtube-dl", "url": "https://github.com/ytdl-org/youtube-dl/issues/21929", "license": "Unlicense", "license_type": "permissive", "license_source": "github-api" }
699568202
How can i put the url on cmd for the login? Checklist [X ] I'm asking a question [ ] I've looked through the README and FAQ for similar questions [ ] I've searched the bugtracker for similar questions including closed ones Question Hi, i don't understand how can i login on a website, how can i put the url on cmd for the login? Thanks See the Authentication Options section of the README. If that doesn't work you will need to provide more information, including the site that you're attempting to download from.
gharchive/issue
2020-09-11T17:32:16
2025-04-01T06:46:19.410210
{ "authors": [ "sesh", "smule98" ], "repo": "ytdl-org/youtube-dl", "url": "https://github.com/ytdl-org/youtube-dl/issues/26568", "license": "Unlicense", "license_type": "permissive", "license_source": "github-api" }
1084048514
discovery plus not working anymore Hello, i wanted to download as usual from Discovery plus but now it says "Unsupported URL" the url is "https://www.discoveryplus.com/it/video/i-signori-della-neve/stagione-2-episodio-1-i-preparativi" Can anyone kindly help? Duplicate of #29144; see #29294. I believe this is implemented in https://github.com/yt-dlp/yt-dlp/releases, if you can run that. Hello, i wanted to download as usual from Discovery plus but now it says "Unsupported URL" the url is "https://www.discoveryplus.com/it/video/i-signori-della-neve/stagione-2-episodio-1-i-preparativi" Can anyone kindly help? try again using the cookies option it still works D:\ytdl>yt-dlp.exe -F https://www.discoveryplus.com/it/video/i-signori-della-neve/stagione-2-episodio-1-i-preparativi [DiscoveryPlus] i-signori-della-neve/stagione-2-episodio-1-i-preparativi: Downloading JSON metadata ERROR: [DiscoveryPlus] i-signori-della-neve/stagione-2-episodio-1-i-preparativi: This video is only available for registered users. You may want to use --cookies. yt-dlp includes the fix for new (as of earlier this year) Discovery VOD sites. yt-dl doesn't yet. Hello, i tried yt-dlp but unfortunately i still cannot understand how to get the video. BTW, this video is available for free and not for registered users only as yt-dlp is stating... how can i solve this?? Best to raise this at https://github.com/yt-dlp/yt-dlp/issues
gharchive/issue
2021-12-19T09:03:07
2025-04-01T06:46:19.414895
{ "authors": [ "BlackLuster13", "dirkf", "fedepr", "october262" ], "repo": "ytdl-org/youtube-dl", "url": "https://github.com/ytdl-org/youtube-dl/issues/30402", "license": "Unlicense", "license_type": "permissive", "license_source": "github-api" }
1274503879
Bug report: 429, then can't reinstall I am using Kali Linux uname -a Linux kali 5.17.0-kali3-amd64 #1 SMP PREEMPT Debian 5.17.11-1kali1 (2022-05-30) x86_64 GNU/Linux uname -m x86_64 I was using youtbe-dl to download something, it says: youtube-dl --proxy="http://127.0.0.1:7890" --yes-playlist -i https://youtu.be/ilw-qmqZ5zY?list=RDMDt1Ed_Qwlo --verbose [debug] System config: ['--proxy', 'http://127.0.0.1:8889/', '-o', '/home/xiaoxn/Videos/%(title)s.%(ext)s'] [debug] User config: [] [debug] Custom config: [] [debug] Command-line args: ['--proxy=http://127.0.0.1:7890', '--yes-playlist', '-i', 'https://youtu.be/ilw-qmqZ5zY?list=RDMDt1Ed_Qwlo', '--verbose'] [debug] Encodings: locale UTF-8, fs utf-8, out utf-8, pref UTF-8 [debug] youtube-dl version 2021.12.17 [debug] Python version 3.10.4 (CPython) - Linux-5.17.0-kali3-amd64-x86_64-with-glibc2.33 [debug] exe versions: ffmpeg 4.4.2-1, ffprobe 4.4.2-1 [debug] Proxy map: {'http': 'http://127.0.0.1:7890', 'https': 'http://127.0.0.1:7890'} [youtube:tab] Downloading playlist RDMDt1Ed_Qwlo - add --no-playlist to just download video ilw-qmqZ5zY [youtube:tab] RDMDt1Ed_Qwlo: Downloading webpage ERROR: Unable to download webpage: HTTP Error 429: Too Many Requests (caused by <HTTPError 429: 'Too Many Requests'>); please report this issue on https://yt-dl.org/b to update. Be sure to call youtube-dl with the --verbose flag and include its complete output. 
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 634, in _request_webpage return self._downloader.urlopen(url_or_request) File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 2288, in urlopen return self._opener.open(req, timeout=self._socket_timeout) File "/usr/lib/python3.10/urllib/request.py", line 525, in open response = meth(req, response) File "/usr/lib/python3.10/urllib/request.py", line 634, in http_response response = self.parent.error( File "/usr/lib/python3.10/urllib/request.py", line 557, in error result = self._call_chain(*args) File "/usr/lib/python3.10/urllib/request.py", line 496, in _call_chain result = func(*args) File "/usr/lib/python3.10/urllib/request.py", line 749, in http_error_302 return self.parent.open(new, timeout=req.timeout) File "/usr/lib/python3.10/urllib/request.py", line 525, in open response = meth(req, response) File "/usr/lib/python3.10/urllib/request.py", line 634, in http_response response = self.parent.error( File "/usr/lib/python3.10/urllib/request.py", line 563, in error return self._call_chain(*args) File "/usr/lib/python3.10/urllib/request.py", line 496, in _call_chain result = func(*args) File "/usr/lib/python3.10/urllib/request.py", line 643, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp) I tried to remove youtube-dl using sudo apt remove youtube-dl [sudo] password for xiaoxn: Reading package lists... Done Building dependency tree... Done Reading state information... Done Package 'youtube-dl' is not installed, so not removed 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. Then I run youtube-dl --version 2021.12.17 It seemed that I can not remove youtube-dl and I can not use it either. Regarding HTTP error 429, see pinned issue #23638. 
To get a working up-to-date yt-dl, do something like this: sudo rm -r /usr/local/bin/youtube-dl python3 -m pip install 'https://github.com/ytdl-org/youtube-dl/archive/refs/heads/master.tar.gz' Best not to use rm commands when you don't have to; just delete the binary (using the GUI) from /usr/local/bin/youtube-dl (or delete the whole folder). OP is using Python 3.10, so they're better off running python3 -m pip install --upgrade yt-dlp (after reviewing https://github.com/yt-dlp/yt-dlp#differences-in-default-behavior) Who says there's a GUI? At this screen, you may wish to not install a desktop environment, then Kali Linux becomes "headless" (no graphic interface) which uses less system resources up and commonly found on servers, dropboxes, low powered ARM devices, and the cloud. That's entirely a matter for OP, though it would save us having to deal with the 429 issue.
gharchive/issue
2022-06-17T03:24:58
2025-04-01T06:46:19.422991
{ "authors": [ "beesuns", "dirkf", "gamer191" ], "repo": "ytdl-org/youtube-dl", "url": "https://github.com/ytdl-org/youtube-dl/issues/31032", "license": "Unlicense", "license_type": "permissive", "license_source": "github-api" }
1358782811
[bilibili] how to download a item in a list Checklist [ how to download: https://www.bilibili.com/video/BV1V54y1B7K3?p=17] I'm asking a question [ ] I've looked through the README and FAQ for similar questions [ ] I've searched the bugtracker for similar questions including closed ones Question WRITE QUESTION HERE Here is clickable URL: https://www.bilibili.com/video/BV1V54y1B7K3?p=17 --playlist-items ... RTFM ... This is probably a duplicate of https://github.com/ytdl-org/youtube-dl/issues/31051 ... Indeed, https://www.bilibili.com/video/BV1V54y1B7K3 is a playlist consisting of 18 items (parts), but when that URI is fed to youtube-dl, it is not treated as such: youtube-dl -v -F "https://www.bilibili.com/video/BV1V54y1B7K3" => [debug] System config: [] [debug] User config: [] [debug] Custom config: [] [debug] Command-line args: ['-v', '-F', 'https://www.bilibili.com/video/BV1V54y1B7K3'] [debug] Encodings: locale cp1253, fs mbcs, out cp737, pref cp1253 [debug] youtube-dl version 2022.09.01.19419 ** This build is unofficial daily builds, provided for ease of use. ** Please do not ask for any support. 
[debug] Python version 3.4.4 (CPython) - Windows-Vista-6.0.6003-SP2 [debug] exe versions: ffmpeg 4.4.1, ffprobe 4.4.1, phantomjs 2.1.1, rtmpdump 2.4 [debug] Proxy map: {} [BiliBili] 1V54y1B7K3: Downloading webpage [BiliBili] 1V54y1B7K3: Downloading video info page [info] Available formats for 1V54y1B7K3: format code extension resolution note 0 flv unknown 230.13MiB and even when issuing youtube-dl -v -F "https://www.bilibili.com/video/BV1V54y1B7K3?p=17" or (as suggested) youtube-dl -v -F "https://www.bilibili.com/video/BV1V54y1B7K3" --playlist-items 17 the outcome is identical: [debug] Proxy map: {} [BiliBili] 1V54y1B7K3: Downloading webpage [BiliBili] 1V54y1B7K3: Downloading video info page [info] Available formats for 1V54y1B7K3: format code extension resolution note 0 flv unknown 230.13MiB According to testimony in #31051, I expect the 230MiB file to correspond to the 1st item of that playlist (while OP wants item/part 17); out of curiosity, I DLed that file and it's a 1080p30 encode, with a duration of ca. 1h31m. Contrast all the above with how (latest) yt-dlp handles those URIs: yt-dlp "https://www.bilibili.com/video/BV1V54y1B7K3" --flat-playlist => BiliBili] 1V54y1B7K3: Grabbing original ID via API BiliBili] 840983192: Downloading webpage BiliBili] 840983192: Extracting videos in anthology BiliBili] Downloading anthology 840983192 - add --no-playlist to just download video download] Downloading playlist: ?????????? ?????? ?? ?????(???) BiliBili] Playlist ?????????? ?????? ?? 
?????(???): Downloading 18 videos of 18 download] Downloading video 1 of 18 download] Downloading video 2 of 18 download] Downloading video 3 of 18 download] Downloading video 4 of 18 download] Downloading video 5 of 18 download] Downloading video 6 of 18 download] Downloading video 7 of 18 download] Downloading video 8 of 18 download] Downloading video 9 of 18 download] Downloading video 10 of 18 download] Downloading video 11 of 18 download] Downloading video 12 of 18 download] Downloading video 13 of 18 download] Downloading video 14 of 18 download] Downloading video 15 of 18 download] Downloading video 16 of 18 download] Downloading video 17 of 18 download] Downloading video 18 of 18 download] Finished downloading playlist: ?????????? ?????? ?? ?????(???) and yt-dlp -v -F "https://www.bilibili.com/video/BV1V54y1B7K3?p=17" => [BiliBili] 1V54y1B7K3: Grabbing original ID via API [BiliBili] 840983192: Downloading webpage [BiliBili] 840983192: Extracting videos in anthology [BiliBili] 840983192: Downloading tags [info] Available formats for 840983192_part17: ID EXT RESOLUTION | FILESIZE TBR PROTO | VCODEC VBR ACODEC ABR ------------------------------------------------------------------------------ 0 m4a audio only | ~ 46.22MiB 67k https | audio only mp4a.40.2 67k 1 m4a audio only | ~ 87.59MiB 127k https | audio only mp4a.40.2 127k 2 m4a audio only | ~ 87.59MiB 127k https | audio only mp4a.40.2 127k 3 mp4 640x360 | ~ 33.21MiB 48k https | av01.0.01M.08 48k video only 4 mp4 640x360 | ~ 36.15MiB 52k https | avc1.64001E 52k video only 5 mp4 852x480 | ~ 42.70MiB 62k https | av01.0.04M.08 62k video only 6 mp4 852x480 | ~ 49.83MiB 72k https | avc1.64001F 72k video only 7 mp4 1280x720 | ~ 67.65MiB 98k https | av01.0.05M.08 98k video only 8 mp4 1280x720 | ~ 82.54MiB 120k https | avc1.640028 120k video only 9 mp4 1920x1080 | ~109.79MiB 159k https | av01.0.08M.08 159k video only 10 mp4 1920x1080 | ~126.64MiB 184k https | avc1.640032 184k video only or yt-dlp -F 
"https://www.bilibili.com/video/BV1V54y1B7K3" -I 17 => [BiliBili] 1V54y1B7K3: Grabbing original ID via API [BiliBili] 840983192: Downloading webpage [BiliBili] 840983192: Extracting videos in anthology [BiliBili] Downloading anthology 840983192 - add --no-playlist to just download video [download] Downloading playlist: ?????????? ?????? ?? ?????(???) [BiliBili] Playlist ?????????? ?????? ?? ?????(???): Downloading 1 videos [download] Downloading video 1 of 1 [BiliBili] 1V54y1B7K3: Grabbing original ID via API [BiliBili] 840983192: Downloading webpage [BiliBili] 840983192: Extracting videos in anthology [BiliBili] 840983192: Downloading tags [info] Available formats for 840983192_part17: ID EXT RESOLUTION | FILESIZE TBR PROTO | VCODEC VBR ACODEC ABR ------------------------------------------------------------------------------ 0 m4a audio only | ~ 46.22MiB 67k https | audio only mp4a.40.2 67k 1 m4a audio only | ~ 87.59MiB 127k https | audio only mp4a.40.2 127k 2 m4a audio only | ~ 87.59MiB 127k https | audio only mp4a.40.2 127k 3 mp4 640x360 | ~ 33.21MiB 48k https | av01.0.01M.08 48k video only 4 mp4 640x360 | ~ 36.15MiB 52k https | avc1.64001E 52k video only 5 mp4 852x480 | ~ 42.70MiB 62k https | av01.0.04M.08 62k video only 6 mp4 852x480 | ~ 49.83MiB 72k https | avc1.64001F 72k video only 7 mp4 1280x720 | ~ 67.65MiB 98k https | av01.0.05M.08 98k video only 8 mp4 1280x720 | ~ 82.54MiB 120k https | avc1.640028 120k video only 9 mp4 1920x1080 | ~109.79MiB 159k https | av01.0.08M.08 159k video only 10 mp4 1920x1080 | ~126.64MiB 184k https | avc1.640032 184k video only [download] Finished downloading playlist: ?????????? ?????? ?? ?????(???) I used -f 10+2 to fetch item/part 17 (1080p30 encode again, ca. 1h34m) and it's different to the video file yt-dl fetched... So, unless I'm missing something obvious (I know it has happened before 😜 ), the answer to OP's question would be (i.e. 
if I were to supply it) that it's impossible to download referenced video via youtube-dl in its current state, because its bilibiliIE is not up to that task (requires an update) ... yt-dlp works out that the page with ID 1V54y1B7K3 is actually 840983192; but that page also gets just the single 230MB flv. Apparently a yt-dlp back-port would be good. Duplicate of #30151. Duplicate of https://github.com/ytdl-org/youtube-dl/issues/30151. Actually, as I wrote: duplicate of https://github.com/ytdl-org/youtube-dl/issues/31051 😜
gharchive/issue
2022-09-01T12:38:54
2025-04-01T06:46:19.435388
{ "authors": [ "89z", "Vangelis66", "Xueqc", "dirkf" ], "repo": "ytdl-org/youtube-dl", "url": "https://github.com/ytdl-org/youtube-dl/issues/31217", "license": "Unlicense", "license_type": "permissive", "license_source": "github-api" }
1591240997
ERROR: Unable to extract uploader id; please report this issu youtube-mp3 --verbose 'https://www.youtube.com/watch?v=SQTXL9bvG8U' [debug] System config: [] [debug] User config: [] [debug] Custom config: [] [debug] Command-line args: ['-x', '--audio-format', 'best', '--verbose', 'https://www.youtube.com/watch?v=SQTXL9bvG8U'] [debug] Encodings: locale UTF-8, fs utf-8, out utf-8, pref UTF-8 [debug] youtube-dl version 2021.12.17 [debug] Git HEAD: a222a5bbc [debug] Python version 3.10.10 (CPython) - macOS-13.2.1-arm64-arm-64bit [debug] exe versions: ffmpeg 5.1.2, ffprobe 5.1.2, rtmpdump 2.4 [debug] Proxy map: {} [youtube] SQTXL9bvG8U: Downloading webpage ERROR: Unable to extract uploader id; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output. Traceback (most recent call last): File "/opt/homebrew/Cellar/youtube-dl/2021.12.17/libexec/lib/python3.10/site-packages/youtube_dl/YoutubeDL.py", line 815, in wrapper return func(self, *args, **kwargs) File "/opt/homebrew/Cellar/youtube-dl/2021.12.17/libexec/lib/python3.10/site-packages/youtube_dl/YoutubeDL.py", line 836, in __extract_info ie_result = ie.extract(url) File "/opt/homebrew/Cellar/youtube-dl/2021.12.17/libexec/lib/python3.10/site-packages/youtube_dl/extractor/common.py", line 534, in extract ie_result = self._real_extract(url) File "/opt/homebrew/Cellar/youtube-dl/2021.12.17/libexec/lib/python3.10/site-packages/youtube_dl/extractor/youtube.py", line 1794, in _real_extract 'uploader_id': self._search_regex(r'/(?:channel|user)/([^/?&#]+)', owner_profile_url, 'uploader id') if owner_profile_url else None, File "/opt/homebrew/Cellar/youtube-dl/2021.12.17/libexec/lib/python3.10/site-packages/youtube_dl/extractor/common.py", line 1012, in _search_regex raise RegexNotFoundError('Unable to extract %s' % _name) youtube_dl.utils.RegexNotFoundError: Unable 
to extract uploader id; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output. Pinned issue: READ THIS BEFORE OPENING A NEW ISSUE! #30839 Known issues, fixed in git master code: YouTube: Unable to extract uploader id
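For context on the note above that this is fixed in git master code: YouTube moved channel links to handle-style URLs ("/@SomeChannel"), which the old pattern quoted in the traceback cannot match. The snippet below is an illustrative reconstruction under that assumption, not youtube-dl's exact patched regex.

```python
import re

# Old youtube-dl pattern (quoted in the traceback above) versus an
# illustrative widened pattern that also accepts handle URLs like
# "https://www.youtube.com/@SomeChannel".
OLD_PATTERN = r'/(?:channel|user)/([^/?&#]+)'
NEW_PATTERN = r'/(?:channel/|user/|@)([^/?&#]+)'

def extract_uploader_id(owner_profile_url):
    m = re.search(NEW_PATTERN, owner_profile_url)
    return m.group(1) if m else None

print(re.search(OLD_PATTERN, 'https://www.youtube.com/@SomeChannel'))  # None
print(extract_uploader_id('https://www.youtube.com/@SomeChannel'))     # SomeChannel
print(extract_uploader_id('https://www.youtube.com/channel/UCxyz'))    # UCxyz
```

The old pattern raises the "Unable to extract uploader id" error exactly when the owner profile URL is a handle URL, since no match is found.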
gharchive/issue
2023-02-20T06:41:37
2025-04-01T06:46:19.444803
{ "authors": [ "mdahm", "nicolaasjan" ], "repo": "ytdl-org/youtube-dl", "url": "https://github.com/ytdl-org/youtube-dl/issues/31619", "license": "Unlicense", "license_type": "permissive", "license_source": "github-api" }
75769326
pbs video downloads stopped working with "ERROR: m3u8 download detected but ffmpeg or avconv could not be found." Problem: Fetching pbs videos used to succeed (using a mp4 from alternate_encoding, which is still there), but started failing sometime in the last few months. Fix: Probably a better solution than what I tried would be to discover the downloaders for the available formats and give them a method that allows them to reject it without attempting the download, and walk backwards through the available formats and checking that method before selecting the last entry in available_formats as the preferred encoding. Proposed solution: I have a simpler hack patch, which removes all m3u8 files from the available_formats if ffmpeg is not available. This is not a good way to leave the knowledge inside the downloaders, but after this patch, the same command succeeds: $ git diff --staged youtube_dl/YoutubeDL.py diff --git a/youtube_dl/YoutubeDL.py b/youtube_dl/YoutubeDL.py index 4cf83c5..8633a6f 100755 --- a/youtube_dl/YoutubeDL.py +++ b/youtube_dl/YoutubeDL.py @@ -914,6 +914,9 @@ class YoutubeDL(object): if not available_formats: return None + if not FFmpegMergerPP(self).available: + available_formats = filter(lambda x: x.get(u'protocol') != u'm3u8', available_formats) + if format_spec in ['best', 'worst', None]: format_idx = 0 if format_spec == 'worst' else -1 audiovideo_formats = [ $ Verbose output: Here's the output of a sample run, demonstrating the problem (presumably must not have ffmpeg installed): $ youtube-dl/youtube_dl/__main__.py --verbose http://video.pbs.org/video/2365472415/ [debug] System config: [] [debug] User config: [] [debug] Command-line args: [u'--verbose', u'http://video.pbs.org/video/2365472415/'] [debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8 [debug] youtube-dl version 2015.05.10 [debug] Git HEAD: c1c924a [debug] Python version 2.7.6 - Darwin-14.3.0-x86_64-i386-64bit [debug] exe versions: none [debug] Proxy map: {} [PBS] 
2365472415: Downloading JSON metadata [PBS] 2365472415: Downloading recommended_encoding video url info [PBS] 2365472415: Downloading m3u8 information [PBS] 2365472415: Downloading alternate_encoding video url info [debug] Invoking downloader on u'http://ga.video.cdn.pbs.org/videos/nova/c21a548f-9003-4a20-aebf-58865db2084c/181693/hd-mezzanine-16x9/069e992c_nova_4208_test_rez-16x9-hls-2500k.m3u8' [download] Destination: Invisible Universe Revealed-2365472415.mp4 ERROR: m3u8 download detected but ffmpeg or avconv could not be found. Please install one. File "youtube-dl/youtube_dl/__main__.py", line 19, in <module> youtube_dl.main() File "/private/tmp/youtube-dl/youtube_dl/__init__.py", line 399, in main _real_main(argv) File "/private/tmp/youtube-dl/youtube_dl/__init__.py", line 389, in _real_main retcode = ydl.download(all_urls) File "/private/tmp/youtube-dl/youtube_dl/YoutubeDL.py", line 1483, in download res = self.extract_info(url) File "/private/tmp/youtube-dl/youtube_dl/YoutubeDL.py", line 660, in extract_info return self.process_ie_result(ie_result, download, extra_info) File "/private/tmp/youtube-dl/youtube_dl/YoutubeDL.py", line 706, in process_ie_result return self.process_video_result(ie_result, download=download) File "/private/tmp/youtube-dl/youtube_dl/YoutubeDL.py", line 1154, in process_video_result self.process_info(new_info) File "/private/tmp/youtube-dl/youtube_dl/YoutubeDL.py", line 1416, in process_info success = dl(filename, info_dict) File "/private/tmp/youtube-dl/youtube_dl/YoutubeDL.py", line 1358, in dl return fd.download(name, info) File "/private/tmp/youtube-dl/youtube_dl/downloader/common.py", line 342, in download return self.real_download(filename, info_dict) File "/private/tmp/youtube-dl/youtube_dl/downloader/hls.py", line 27, in real_download self.report_error('m3u8 download detected but ffmpeg or avconv could not be found. 
Please install one.') File "/private/tmp/youtube-dl/youtube_dl/downloader/common.py", line 155, in report_error self.ydl.report_error(*args, **kargs) File "/private/tmp/youtube-dl/youtube_dl/YoutubeDL.py", line 527, in report_error self.trouble(error_message, tb) File "/private/tmp/youtube-dl/youtube_dl/YoutubeDL.py", line 489, in trouble tb_data = traceback.format_list(traceback.extract_stack()) I think that covers everything the guidelines asked for. Cheers. (edit: added markdown to get raw text to show up better) Issue should be closed, URL is dead: https://www.pbs.org/video/nova-invisible-universe-revealed/
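The hack patch above can be expressed as a small standalone sketch; the names here are illustrative, not youtube-dl's actual API, and only restate the filtering idea from the diff:

```python
# Standalone sketch of the patch above: when no HLS-capable tool
# (ffmpeg/avconv) is available, drop m3u8 formats so that "last entry =
# preferred format" selection falls back to a directly downloadable encoding.
def usable_formats(available_formats, have_ffmpeg):
    if have_ffmpeg:
        return available_formats
    return [f for f in available_formats if f.get('protocol') != 'm3u8']

formats = [
    {'format_id': 'mp4-alternate', 'protocol': 'https'},
    {'format_id': 'hls-2500k', 'protocol': 'm3u8'},  # listed last = preferred
]
best = usable_formats(formats, have_ffmpeg=False)[-1]
print(best['format_id'])  # mp4-alternate
```

A more general fix, as the report suggests, would let each downloader veto formats it cannot handle before selection, rather than special-casing m3u8.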
gharchive/issue
2015-05-12T22:54:25
2025-04-01T06:46:19.449258
{ "authors": [ "89z", "GrumpyOldTroll" ], "repo": "ytdl-org/youtube-dl", "url": "https://github.com/ytdl-org/youtube-dl/issues/5683", "license": "Unlicense", "license_type": "permissive", "license_source": "github-api" }
1354851710
[youtube] add published_time for --flat-playlist JSON When getting JSON using "--flat-playlist", it lacks any information about video age. It is very desirable to have such information. This PR adds "published_time" to the JSON. Example: youtube-dl --flat-playlist -j ytsearch10:"egg recipe" It would be useful. But... The text value should be converted to a timestamp, as is done in yt-dlp: however there are several dependencies of that code; we'd have to pull in some further methods of YoutubeBaseInfoExtractor as well as some non-trivial routines from utils.py. As other PRs need those routines, I'll try to get them merged to simplify the job. Well, it would definitely be very good to have a timestamp. Unfortunately, Youtube's publishedTimeText just contains broad terms like "5 weeks ago" or "4 months ago". So converting that to a timestamp doesn't make much sense. Anyway, it is still very useful to add this information (even as text) to know how old the video is. (If there is a more precise date/time elsewhere it would be great). The publishedTimeText can provide the highest resolution that YT offers if the video is new ("1 hour ago", e.g.). When each video page is actually processed there is an upload_date, but I believe the pTT is the only field available when the playlist is being processed. Anyhow, converting this relative time text to a datetime is what the yt-dlp code that I mentioned does.
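On the timestamp point discussed above: converting relative publishedTimeText to an approximate Unix time can be sketched as below. This is an illustrative approximation, not the yt-dlp routine referred to in the thread; the month and year lengths are assumptions (30 and 365 days).

```python
import re

# Illustrative parser for YouTube's relative publishedTimeText, e.g.
# "5 weeks ago" or "1 hour ago". Month/year sizes are rough assumptions.
_UNIT_SECONDS = {
    'second': 1, 'minute': 60, 'hour': 3600, 'day': 86400,
    'week': 7 * 86400, 'month': 30 * 86400, 'year': 365 * 86400,
}

def parse_relative_time(text, now):
    """Return an approximate Unix timestamp, or None if text doesn't parse."""
    m = re.match(r'(\d+)\s+(second|minute|hour|day|week|month|year)s?\s+ago', text)
    if not m:
        return None
    count, unit = int(m.group(1)), m.group(2)
    return now - count * _UNIT_SECONDS[unit]

NOW = 1_700_000_000  # fixed "current" time so the example is reproducible
print(parse_relative_time('5 weeks ago', NOW))  # 1696976000
print(parse_relative_time('1 hour ago', NOW))   # 1699996400
```

As noted in the thread, the result is only as precise as the source text ("4 months ago" is a coarse bucket), which is why carrying the raw text through as well is still useful.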
gharchive/pull-request
2022-08-29T20:43:45
2025-04-01T06:46:19.453697
{ "authors": [ "blueowl04", "dirkf" ], "repo": "ytdl-org/youtube-dl", "url": "https://github.com/ytdl-org/youtube-dl/pull/31213", "license": "Unlicense", "license_type": "permissive", "license_source": "github-api" }
1636186560
Data Accessibility and Availability Hi @yuGithuuub , Congratulations on this nice publication! As I am very intrigued by your findings and would like to take a closer look at your data, I have some issues accessing the spatial images from your publication. It is mentioned in the data availability section of the manuscript that "All processed data have been uploaded to the figshare database(https://www.nature.com/articles/s41597-022-01676-w#ref-CR26). These data include filtered_feature_bc_matrix.h5, cloupe file and spatial files." However, in the figshare directory for this article, I could find neither filtered_feature_bc_matrix.h5 nor the spatial files for this data. I wonder if you could kindly update your figshare repository. Thank you! Thank you for your prompt reply, @yuGithuuub ! I could view the loupe file, but I couldn't load the loupe with Seurat or other packages to R. Also currently, only the integrated data folder has a .RData file but the integrated Seurat object doesn't actually contain @image assay slot. L5 or 18 don't have the .RData. Thank you! Hi DHGK I regenerated a link, please check if it meets your needs. (https://figshare.com/s/9540591b32f67a415735) Thank you again for following up! I could see that there are two shared files, but I could not actually download them. When I tried, I got this error message: {"message": "Entity not found: file", "code": "EntityNotFound"} Hi DHGK: Sorry for what you've been through. Please try again and see if you can download successfully. (https://doi.org/10.6084/m9.figshare.22321447.v1) Best Sz Yu Thanks a lot! It works :)
gharchive/issue
2023-03-22T17:13:28
2025-04-01T06:46:20.106893
{ "authors": [ "DHGK", "yuGithuuub" ], "repo": "yuGithuuub/Normal_liver_visium", "url": "https://github.com/yuGithuuub/Normal_liver_visium/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1902960987
VS2019 run fails with an error Hello, it runs fine for me on VS2015, but on VS2019 the step auto builder=TensorRTUniquePtr<nvinfer1::IBuilder>(nvinfer1::createInferBuilder(blogger.getTRLogger())); comes back empty. Is this because the model is incompatible with VS2019? @lxh131419 Hi, I haven't run it on VS, so I'm not able to offer any advice. Sorry. Hello, how did you manage to compile it on VS2015? I have been compiling for a long time and keep hitting errors. Wherever an error is reported, just add the missing header files and libraries. If you need, I can send you a copy, but the DLLs are too large; you will have to set those up yourself. Sent from my iPhone ------------------ Original mail ------------------ From: oUp2Uo @.> Date: 2023-11-16 16:55 To: yuefanhao/SuperPoint-SuperGlue-TensorRT @.> Cc: lxh131419 @.>, Mention @.> Subject: Re: [yuefanhao/SuperPoint-SuperGlue-TensorRT] VS2019 run fails with an error (Issue #20)
gharchive/issue
2023-09-19T13:16:57
2025-04-01T06:46:20.119742
{ "authors": [ "lxh131419", "oUp2Uo", "yuefanhao" ], "repo": "yuefanhao/SuperPoint-SuperGlue-TensorRT", "url": "https://github.com/yuefanhao/SuperPoint-SuperGlue-TensorRT/issues/20", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1357672583
🛑 CMS is down In 20c6d4a, CMS ($CMS_SITE) was down: HTTP code: 502 Response time: 2061 ms Resolved: CMS is back up in 72f5528.
gharchive/issue
2022-08-31T17:56:41
2025-04-01T06:46:20.139946
{ "authors": [ "soemarko-yukbid" ], "repo": "yukbid/upptime", "url": "https://github.com/yukbid/upptime/issues/259", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1382202741
Error reported when cleaning up splitable logs What happened: Environment: OS (e.g. cat /etc/os-release): Kernel (e.g. uname -a): https://github.com/yunionio/cloudpods/pull/15032
gharchive/issue
2022-09-22T10:19:32
2025-04-01T06:46:20.159397
{ "authors": [ "swordqiu" ], "repo": "yunionio/cloudpods", "url": "https://github.com/yunionio/cloudpods/issues/15026", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
227739887
disable ARIA warnings let's wait until it becomes a problem Closes #461 Codecov Report Merging #492 into master will not change coverage. The diff coverage is n/a. @@ Coverage Diff @@ ## master #492 +/- ## ======================================= Coverage 98.56% 98.56% ======================================= Files 215 215 Lines 2993 2993 ======================================= Hits 2950 2950 Misses 43 43 Continue to review full report at Codecov. Legend - Click here to learn more Δ = absolute <relative> (impact), ø = not affected, ? = missing data Powered by Codecov. Last update f040a7c...15b465d. Read the comment docs.
gharchive/pull-request
2017-05-10T16:33:21
2025-04-01T06:46:20.166351
{ "authors": [ "codecov-io", "tiltec" ], "repo": "yunity/foodsaving-frontend", "url": "https://github.com/yunity/foodsaving-frontend/pull/492", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2514683531
🛑 Bitwarden is down In 955544f, Bitwarden (https://bitwarden.yunus.eu.org) was down: HTTP code: 0 Response time: 0 ms Resolved: Bitwarden is back up in 62a43b5 after 13 minutes.
gharchive/issue
2024-09-09T18:54:26
2025-04-01T06:46:20.168908
{ "authors": [ "yunus25jmi1" ], "repo": "yunus25jmi1/uptime-yunusteam", "url": "https://github.com/yunus25jmi1/uptime-yunusteam/issues/10", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2585192840
🛑 Excalidraw is down In af9b628, Excalidraw (https://excalidraw.yunuscloud.eu.org) was down: HTTP code: 0 Response time: 0 ms Resolved: Excalidraw is back up in 0c6716c after 21 minutes.
gharchive/issue
2024-10-14T08:18:48
2025-04-01T06:46:20.171481
{ "authors": [ "yunus25jmi1" ], "repo": "yunus25jmi1/uptime-yunusteam", "url": "https://github.com/yunus25jmi1/uptime-yunusteam/issues/139", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
53384427
Mustache binding doesn't work when starting with an underscore It seems that Vue.js cannot properly parse mustache bindings that start with an underscore like: {{_type}}. It works fine with Handlebars.js Could you please add support for this kind of binding? demo: http://jsfiddle.net/maxjiang23326/oc0y33Lx/ That isn't a bug with Vue, it's an intentional feature. From the guide (at the bottom of that section): Under the hood, Vue.js attaches a hidden property __ob__ and recursively converts the object's enumerable properties into getters and setters to enable dependency collection. Properties with keys that start with $ or _ are skipped. What this means is that your _type wasn't proxied onto hits[0], but it's still attached to the $data field, so it can be accessed as hits[0].$data._type. Here's a fiddle. This is however just a workaround. The idea is that if you've started your variable with an underscore, it's probably a private variable, and so you probably shouldn't be binding the DOM to it. If you're stuck with that data structure, the workaround is fine, but if you can change it, why not remove the underscore since type isn't private anyway? Thanks a lot! Here's another case in which I cannot access '$index', '$key' and '$value' when adding or deleting a property whose key starts with an underscore. Is there any workaround for this case? Here's a fiddle: http://jsfiddle.net/maxjiang23326/ysd6Laco/1/ Thanks in advance. Hmm I'm not sure why that behaviour is happening. It might actually be a bug. But I think you're making things difficult for yourself in using underscores for properties you're trying to bind to. Actually, using underscores is really not my choice. The data structure is not defined by me and I've found no easy way to change it. In addition, I've found another weird behaviour - if I add another normal property along with adding the property with underscore, it works!
Here's the fiddle: http://jsfiddle.net/ysd6Laco/2/ Properties with _ or $ are not observed for changes and you should use them to store static data that does not change over time. If you need to change the data structure you can do so in the created or beforeCompile hook: var data = { _id: 123 } new Vue({ created: function() { this.$set('id', data._id) } }) @yyx990803 – I'm running into an issue with nested data that contains attributes beginning with "_". Preventing observation of properties beginning with "_" and "$" makes a lot of sense on View Models but when it applies to nested data that could come from any number of different REST APIs, it starts to be a bit restrictive. Would it be possible to relax this restriction to where the existing rule applies to View Models but for simple nested data only properties beginning with "$" and the "__ob__" property itself are ignored? @yyx990803, yes, but if they are skipped for observation then you can't bind to them and expect the UI to update when they are mutated. For instance, let's say I GET a resource via a REST api and store the result in $data.response. Let's also assume that this REST api response happens to contain something like a _related property which is a list of topics and I bind that to a select that allows multiple selection and I render it to the page as a bullet list. If I change the value of _related using the select, the bullet list will never update. This seems to be a bit of a landmine for people working with APIs that happen to use "_" in property names of REST resources. It seems a reasonable rule on Vue objects but less reasonable to apply it to all nested data below it – the naming of which the developer may have no control over. @davidkhess fair enough, sounds like it'd be a good idea to observe them while just skipping proxying them at root level.
@yyx990803 That sounds like a great solution. +1. I run into the same problem when binding to images which come from couchdb and are stored under _attachments in couchdb. So it would be great if I could just observe them under say objekt._attachments. Right now I copy them into an extra $data.attachments property. Thanks a lot if you can change the behaviour :) Closed via #b89e7c3
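The behaviour agreed on above — observe prefixed keys for changes but skip proxying them at the root, leaving them reachable only through the data object — can be sketched as a toy model in Python (a hypothetical illustration only; Vue's actual implementation is in JavaScript):

```python
RESERVED_PREFIXES = ("_", "$")

class Data:
    """Toy model: root keys with reserved prefixes are not proxied as
    attributes, but every key remains readable through the data dict."""

    def __init__(self, data):
        self.data = dict(data)
        for key in self.data:
            if not key.startswith(RESERVED_PREFIXES):
                # Proxy only "safe" root-level keys as attributes.
                setattr(self, key, self.data[key])

vm = Data({"type": "a", "_type": "b"})
```

Here `vm.type` works as a proxied attribute, while `vm.data["_type"]` mirrors the `hits[0].$data._type` workaround from the thread.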
gharchive/issue
2015-01-05T10:51:52
2025-04-01T06:46:20.229452
{ "authors": [ "TMiguelT", "agonbina", "akralj", "davidkhess", "maxjiang23326", "yyx990803" ], "repo": "yyx990803/vue", "url": "https://github.com/yyx990803/vue/issues/665", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
807790181
Use event.wait instead of time.sleep in ReceivedPaymentsSubscriptionClient https://github.com/yzernik/squeaknode/blob/4f1272cc26cb438c9717cf061bd02ee62fd1860d/squeaknode/node/received_payments_subscription_client.py#L79 This issue could cause graceful shutdown to not work, when there is an client subscribed to received payments. https://github.com/yzernik/squeaknode/pull/847
gharchive/issue
2021-02-13T17:23:24
2025-04-01T06:46:20.238179
{ "authors": [ "yzernik" ], "repo": "yzernik/squeaknode", "url": "https://github.com/yzernik/squeaknode/issues/844", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
520707757
请问如何在view视图中判断是否已登录? Laravel Version: #.#.# PHP Version: Laravel-admin: #.#.# Description: Steps To Reproduce: @auth You are logged in. @endauth
gharchive/issue
2019-11-11T02:41:05
2025-04-01T06:46:20.255883
{ "authors": [ "datangkang123", "topegret" ], "repo": "z-song/laravel-admin", "url": "https://github.com/z-song/laravel-admin/issues/4139", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2254515277
Add sorting options to switcher Last message Last fronted Alphabetical Maybe more? Added in 559d0d2
gharchive/issue
2024-04-20T11:41:23
2025-04-01T06:46:20.257196
{ "authors": [ "z0w13" ], "repo": "z0w13/pkstatus", "url": "https://github.com/z0w13/pkstatus/issues/50", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
269366328
begin createTicket function, create template + controller Hey Ben, I'm now working on the front-end portion of creating tickets. I have everything squared away with the user form and the controller. What I'm stuck on is passing the info over to Rails. Here's my createTicket function in the Service: ZenFactory.createTicket = function(subject, comment, submitter) { var createTicket = { method: 'POST', url: 'http://localhost:3000/api/tickets', data: { subject: subject, comment: comment, submitter: submitter } }; } And here's the method on Rails' end: def create(subject, comment_body, submitter) new_ticket = ZEN_CLIENT.tickets.create(:subject => subject, :comment => { :value => comment_body }, :via => { :source => { :from => { :name => submitter }}}, :requester => { :name => submitter }) end The issue is that when I try and submit the form, I'm getting the following error: "#<ArgumentError: wrong number of arguments (given 0, expected 3)>" So the subject, comment and submitter data aren't being passed through to Rails. I'm thinking that passing the data over successfully will require some updates in the code on both Angular and Rails but not sure how to go about doing this. Are you getting the error in your Rails or your Angular code? That error is coming from Rails side. Everything seems to be working properly on Angular. Hey Ben, just a heads up that this is fixed! My mentor at work and I figured it out. Everything was correct on the Angular side. I just had to update the create method in Rails by adding params. Perfect!
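The resolution mentioned above — updating the Rails `create` method "by adding params" — amounts to reading named fields from a single request-params mapping instead of declaring positional arguments. A minimal sketch of that shape in Python (hypothetical names; the real fix is a Rails controller using the params hash):

```python
def create_ticket(params):
    """Sketch of the fix: the action receives one params mapping built
    from the request body, rather than three positional arguments."""
    subject = params["subject"]
    comment = params["comment"]
    submitter = params["submitter"]
    # Shape the payload the way the Zendesk-style API in the thread expects.
    return {
        "subject": subject,
        "comment": {"value": comment},
        "requester": {"name": submitter},
    }

ticket = create_ticket(
    {"subject": "Help", "comment": "It broke", "submitter": "Max"}
)
```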
gharchive/pull-request
2017-10-29T02:47:29
2025-04-01T06:46:20.285432
{ "authors": [ "bmneely", "zacharyehren" ], "repo": "zacharyehren/capstone", "url": "https://github.com/zacharyehren/capstone/pull/7", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2089839008
Should Apple's "Optimized Battery Charging" be disabled before installing bclm? I think the answer is "Yes", but I'm asking here to be certain. Here's the control I'm talking about in case there's any doubt. IOW, should the switch for Optimized Battery Charging be as shown here - or should it be switched OFF? crickets... :) If that feature utilizes the CHWA key then it would need to be disabled. I'm not sure since it is undocumented and I did not personally reverse engineer the firmware. Considering nobody has reported otherwise since the Apple Silicon release, I'm going to assume it doesn't matter. Feel free to report back if you find otherwise. I've no idea whether it uses the CHWA key or not, but I'm reluctant to allow it to run alongside bclm - esp. since we don't know what the effect might be. But thanks for following up.
gharchive/issue
2024-01-19T07:37:27
2025-04-01T06:46:20.289228
{ "authors": [ "seamusdemora", "zackelia" ], "repo": "zackelia/bclm", "url": "https://github.com/zackelia/bclm/issues/32", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
66988007
reuse existing terminal window if one exists No need to create a brand-new window for one shell. Brackets Git updated to version 0.14.22 (both brackets registry & npm) @macbookandrew this was one amazing fix! thanks a lot :-) @zaggino Thanks. It was really starting to bug me and I realized there was no good reason need another window open just for one shell.
gharchive/pull-request
2015-04-07T20:52:45
2025-04-01T06:46:20.340920
{ "authors": [ "macbookandrew", "zaggino" ], "repo": "zaggino/brackets-git", "url": "https://github.com/zaggino/brackets-git/pull/990", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
465952097
Consistent width and padding for separate dial code option I think this is a better approach for this issue, as it keeps everything in place and consistently spaced across different dial code lengths. What do you think? I think it is not ideal. There is too much space between the dial code and the phone number, especially if the user selected a dial code with 1-2 digits.
gharchive/pull-request
2019-07-09T19:39:59
2025-04-01T06:46:20.345914
{ "authors": [ "pasevin", "zaizac" ], "repo": "zaizac/ngx-intl-tel-input", "url": "https://github.com/zaizac/ngx-intl-tel-input/pull/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
204843078
Preinstall dependencies before lerna bootstrap Lerna bootstrap does not install peerDependencies within packages. See https://github.com/lerna/lerna/issues/375 and https://github.com/lerna/lerna/issues/371. The workaround for now is to use yarn to install dependencies in each package and then linklocal for linking local dependencies (UPDATE: with npm instead of yarn) and then run lerna bootstrap. Eventually lerna should either support peerDependencies or be merged into yarn. Changes Unknown when pulling f514904aa53853a8312dcee018ae8ad8d30a91a7 on fix-lerna-bootstrap into master. Changes Unknown when pulling b7500777780f62560a3397c3a5b4233a690cfb40 on fix-lerna-bootstrap into master. 👍 👍
gharchive/pull-request
2017-02-02T11:06:30
2025-04-01T06:46:20.410744
{ "authors": [ "coveralls", "danpersa", "mfellner", "semonte" ], "repo": "zalando-incubator/tessellate", "url": "https://github.com/zalando-incubator/tessellate/pull/54", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
222775853
web-ui: OAuth show an error to the user if firewall fails As A user I Want to be notified if something went wrong during login So That I can be aware of the root cause moved to internal repo
gharchive/issue
2017-04-19T15:23:44
2025-04-01T06:46:20.412204
{ "authors": [ "rbarilani" ], "repo": "zalando-incubator/zally", "url": "https://github.com/zalando-incubator/zally/issues/294", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
153968049
#426 add documentation
fixes #426
[Current coverage][cc-pull] is 52.19%
Merging [#427][cc-pull] into [master][cc-base-branch] will increase coverage by +5.74%

Files (not in diff) that were modified:
- 2 files in ...o/stups/fullstop/web: Partials -6, Hits +6
- 2 files in ...lando/stups/fullstop: Misses -5, Partials -2, Hits +7
- 2 files in ...tion/repository/impl: Misses -1, Partials -20, Hits +21
- 5 files in ...top/violation/entity: Misses -14, Partials -5, Hits +19
- 1 file in ...s/fullstop/violation: Partials -4, Hits +4
- 2 files in ...tups/fullstop/domain: Misses -2, Hits +2
- 2 files in ...lando/stups/fullstop: Misses -1, Partials -2, Hits +3
- 2 files in ...ugin/unapproved/impl: Misses -1, Partials -2, Hits +3
- 2 files in ...op/plugin/unapproved: Partials -8, Hits +8
- 2 files in ...tups/fullstop/plugin: Misses -1, Partials -4, Hits +5

@@            master    #427    diff @@
==========================================
  Files          133     133
  Lines         3100    3100
  Methods          0       0
  Messages         0       0
  Branches       256     256
==========================================
+ Hits          1440    1618    +178
+ Misses        1590    1482    -108
+ Partials        70       0     -70

Powered by Codecov. Last updated by [c1344ba...715aa95][cc-compare]

[cc-base-branch]: https://codecov.io/gh/zalando-stups/fullstop/branch/master?src=pr
[cc-compare]: https://codecov.io/gh/zalando-stups/fullstop/compare/c1344baf4dd661e8fecedcf60ea27594c97facb2...715aa95f37c114b349dc58747e613153216be177
[cc-pull]: https://codecov.io/gh/zalando-stups/fullstop/pull/427?src=pr

👍
gharchive/pull-request
2016-05-10T09:58:45
2025-04-01T06:46:20.429531
{ "authors": [ "Gregsen", "codecov-io", "mrandi" ], "repo": "zalando-stups/fullstop", "url": "https://github.com/zalando-stups/fullstop/pull/427", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
219551048
Ignore context args Fixes #428. Allow using handler functions without using user or token_info args (set by security decorator). Coverage remained the same at 100.0% when pulling 0dbaefbfc420e63c17505770b1a0c6262f50f088 on ignore-context-args into 1858c9d4a02084b5cae8f72e4d2273a1157389a6 on master. Coverage remained the same at 100.0% when pulling 0dbaefbfc420e63c17505770b1a0c6262f50f088 on ignore-context-args into 1858c9d4a02084b5cae8f72e4d2273a1157389a6 on master. Coverage remained the same at 100.0% when pulling 0dbaefbfc420e63c17505770b1a0c6262f50f088 on ignore-context-args into 1858c9d4a02084b5cae8f72e4d2273a1157389a6 on master. Coverage remained the same at 100.0% when pulling ceca0202313f4b86a62cf2caae7568e9989988ef on ignore-context-args into 1858c9d4a02084b5cae8f72e4d2273a1157389a6 on master. Coverage remained the same at 100.0% when pulling ceca0202313f4b86a62cf2caae7568e9989988ef on ignore-context-args into 1858c9d4a02084b5cae8f72e4d2273a1157389a6 on master. Coverage remained the same at 100.0% when pulling ceca0202313f4b86a62cf2caae7568e9989988ef on ignore-context-args into 1858c9d4a02084b5cae8f72e4d2273a1157389a6 on master. 👍 @rafaelcaricio can you release 1.1.8?
gharchive/pull-request
2017-04-05T11:25:00
2025-04-01T06:46:20.439648
{ "authors": [ "coveralls", "hjacobs", "rafaelcaricio" ], "repo": "zalando/connexion", "url": "https://github.com/zalando/connexion/pull/429", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
170590965
Tests fail under headless CentOS 7 Hello, I have downloaded and tried this package under CentOS 7 (headless server) but the tests failed, any idea? [vagrant@localhost go-keyring]$ go test --- FAIL: TestSet (0.04s) keyring_test.go:15: Should not fail, got: The name org.freedesktop.secrets was not provided by any .service files --- FAIL: TestGet (0.00s) keyring_test.go:23: Should not fail, got: The name org.freedesktop.secrets was not provided by any .service files keyring_test.go:28: Should not fail, got: The name org.freedesktop.secrets was not provided by any .service files keyring_test.go:32: Expected password test-password, got --- FAIL: TestGetNonExisting (0.00s) keyring_test.go:40: Expected error ErrNotFound, got The name org.freedesktop.secrets was not provided by any .service files --- FAIL: TestDelete (0.00s) keyring_test.go:48: Should not fail, got: The name org.freedesktop.secrets was not provided by any .service files FAIL exit status 1 FAIL github.com/zalando/go-keyring 0.043s Thanks :) It looks like you don't have gnome-keyring installed. But even if you do it might not work because gnome-keyring will ask you to unlock the keychain by showing a gtk3 dialog box, which isn't possible on a headless server. We also have this issue on travis and still haven't found a way to solve it #1. Tests should run fine on a desktop with gnome-keyring installed and configured though.
I do not think that it is related to gnome-keyring [vagrant@localhost ~]$ sudo yum list | grep gnome-keyring Repository sysdev-main is listed more than once in the configuration gnome-keyring.x86_64 3.14.0-1.el7 @centos libgnome-keyring.x86_64 3.8.0-3.el7 @centos gnome-keyring.i686 3.14.0-1.el7 centos gnome-keyring-pam.i686 3.14.0-1.el7 centos gnome-keyring-pam.x86_64 3.14.0-1.el7 centos libgnome-keyring.i686 3.8.0-3.el7 centos libgnome-keyring-devel.i686 3.8.0-3.el7 centos libgnome-keyring-devel.x86_64 3.8.0-3.el7 centos [vagrant@localhost ~]$ My second guess would be that the dbus session isn't started when you are running headless. (It's usually started by your login manager [1]). You could try to run the tests like this: $ dbus-launch go test [1] https://dbus.freedesktop.org/doc/dbus-launch.1.html Thanks for your help, I have tried also this and the result remains the same. [vagrant@localhost go-keyring]$ sudo systemctl status dbus ● dbus.service - D-Bus System Message Bus Loaded: loaded (/usr/lib/systemd/system/dbus.service; static; vendor preset: disabled) Active: active (running) since jeu. 
2016-08-11 15:31:56 UTC; 6h ago Main PID: 597 (dbus-daemon) CGroup: /system.slice/dbus.service └─597 /bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation août 11 20:11:11 localhost.localdomain dbus[597]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' août 11 20:11:11 localhost.localdomain dbus-daemon[597]: dbus[597]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' août 11 20:11:12 localhost.localdomain dbus[597]: [system] Activating via systemd: service name='org.freedesktop.machine1' unit='dbus-org.freedes...service' août 11 20:11:12 localhost.localdomain dbus-daemon[597]: dbus[597]: [system] Activating via systemd: service name='org.freedesktop.machine1' unit=...ervice' août 11 20:11:12 localhost.localdomain dbus[597]: [system] Successfully activated service 'org.freedesktop.machine1' août 11 20:11:12 localhost.localdomain dbus-daemon[597]: dbus[597]: [system] Successfully activated service 'org.freedesktop.machine1' août 11 21:04:04 localhost.localdomain dbus[597]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.fr...service' août 11 21:04:04 localhost.localdomain dbus-daemon[597]: dbus[597]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' ...ervice' août 11 21:04:05 localhost.localdomain dbus[597]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' août 11 21:04:05 localhost.localdomain dbus-daemon[597]: dbus[597]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Hint: Some lines were ellipsized, use -l to show in full. 
[vagrant@localhost go-keyring]$ dbus-launch go test --- FAIL: TestSet (0.00s) keyring_test.go:15: Should not fail, got: The name org.freedesktop.secrets was not provided by any .service files --- FAIL: TestGet (0.00s) keyring_test.go:23: Should not fail, got: The name org.freedesktop.secrets was not provided by any .service files keyring_test.go:28: Should not fail, got: The name org.freedesktop.secrets was not provided by any .service files keyring_test.go:32: Expected password test-password, got --- FAIL: TestGetNonExisting (0.00s) keyring_test.go:40: Expected error ErrNotFound, got The name org.freedesktop.secrets was not provided by any .service files --- FAIL: TestDelete (0.00s) keyring_test.go:48: Should not fail, got: The name org.freedesktop.secrets was not provided by any .service files FAIL exit status 1 FAIL github.com/zalando/go-keyring 0.006s It would be great if we could make this library work with this case. Python manages its keyring lib with a dedicated dbus package. Python DBUS : https://pypi.python.org/pypi/dbus-python/1.2.4 Python Keyring (works on headless Linux) : https://pypi.python.org/pypi/keyring Your log shows the system bus being active. dbus-launch should start a session bus but maybe it doesn't do it here since it still doesn't find the service. Does the python keyring lib work correctly on the same setup with the secret service backend? IIRC the python keyring library will fall back to file-based storage if the secret service (dbus) interface isn't available, so make sure that isn't happening. Hi @ahmed-bacha, do you need anything else on this topic? I'll close it for now, but ping back if you still have questions. cc @mikkeloscar
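The diagnosis in this thread — the system D-Bus is running, but the Secret Service lives on the D-Bus session bus, which headless shells often lack — suggests detecting the session bus before attempting the secret-service backend and falling back otherwise, as the Python keyring library is described to do. A minimal stdlib-only sketch (hypothetical helper, not part of go-keyring):

```python
import os

def secret_service_likely_available():
    """Heuristic: the Secret Service API lives on the D-Bus *session*
    bus, normally started by a login manager; headless sessions often
    have no DBUS_SESSION_BUS_ADDRESS set."""
    return bool(os.environ.get("DBUS_SESSION_BUS_ADDRESS"))

def choose_backend():
    # Fall back to file-based storage when no session bus is present.
    return "secret-service" if secret_service_likely_available() else "file"
```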
gharchive/issue
2016-08-11T08:07:35
2025-04-01T06:46:20.446568
{ "authors": [ "LappleApple", "ahmed-bacha", "mikkeloscar" ], "repo": "zalando/go-keyring", "url": "https://github.com/zalando/go-keyring/issues/8", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
461906747
Add note about Spring Boot Starter dependency to README Description The README should mention to use the Spring Boot Starter dependency instead of the core dependency when working with Spring Boot. Motivation and Context To combat issues like this one https://github.com/zalando/logbook/issues/218#issuecomment-506634294 Types of changes [ ] Bug fix (non-breaking change which fixes an issue) [ ] New feature (non-breaking change which adds functionality) [ ] Breaking change (fix or feature that would cause existing functionality to change) [x] Documentation Checklist: [x] My change requires a change to the documentation. [x] I have updated the documentation accordingly. [ ] I have added tests to cover my changes. :+1: Thanks for the contribution!
gharchive/pull-request
2019-06-28T07:53:54
2025-04-01T06:46:20.450490
{ "authors": [ "marcelstoer", "whiskeysierra" ], "repo": "zalando/logbook", "url": "https://github.com/zalando/logbook/pull/545", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
277073784
Include demo SPA repos in README The examples provided by default through Tailor are great to get started, but I think linking directly to the SPA examples in the README would be helpful to those looking to dive a bit deeper into more complex and practical usage of Tailor (similar to the techniques used by Zalando in production). The second of the two example SPAs was built in response to a dependency handling issue (#196) Thanks @tsnolan23 for the PR :) 👍 :+1: @tsnolan23, @vigneshshanmugam how would one put together a SPA with the use of fragments? This is not very clear, as SPAs navigate via routes and the HTML is rendered on the front-end, not on the server. I hope you see my dilemma: if I change routes within a SPA and want to navigate to another fragment, how would I execute that?
gharchive/pull-request
2017-11-27T16:08:29
2025-04-01T06:46:20.478803
{ "authors": [ "DeTeam", "stevoPerisic", "tsnolan23", "vigneshshanmugam" ], "repo": "zalando/tailor", "url": "https://github.com/zalando/tailor/pull/200", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
812598309
[Migrated] Fix the naming of event rule target id Originally from: https://github.com/Miserlou/Zappa/issues/1546 by jaykay Description This is a fix for #1545 GitHub Issues #1545 not enough info. closing.
gharchive/issue
2021-02-20T12:26:22
2025-04-01T06:46:20.549279
{ "authors": [ "jneves", "monkut" ], "repo": "zappa/Zappa", "url": "https://github.com/zappa/Zappa/issues/596", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2709777003
Add support for adaptive sampling Motivation For any set of n items, there are O(n^2) = n(n-1)/2 possible pairs. As n grows, the number of pairs grows quadratically, creating a large space over which we must allocate limited voter attention. Without supporting solutions, power ranking becomes infeasible for high n. One possible solution is to implement a process of "adaptive sampling" by which pairs are shown not randomly from the entire set, but adaptively based on knowledge of prior votes and where voter attention will have the highest leverage. To do this, we calculate a "confidence score" for every pair, indicating how confident we are about which of the two items is preferred. When showing pairs to users, we select pairs with low confidence scores first. Through this process, we are continuously maximizing average confidence among all pairs ("maximizing certainty") with a minimum of voter attention. Implementation We will define a "confidence score" as follows: function confidence(a, b) { const max = Math.max(a, b); const skew = max / (a + b); return max * skew; } This function has the following properties: Pairs with more total votes will have a higher confidence Pairs with a higher skew will have a higher confidence We can calculate the confidence for any pair in O(1) time, and draw pairs randomly with a probability based on 1 / confidence, thus preferring pairs for which we are less confident. Maybe it should be the entropy of the Beta distribution, since the Beta distribution would represent our beliefs about the fraction of users that prefer one option. The prior could be a Uniform prior or, better, one of the other priors described in this paper, such as Beta(0.5, 0.5). Then again, I bet the formula produces similar results, and is simpler! Hmm, so it occurs to me that we want to prioritize sampling pairs that minimize overall uncertainty about the final scores.
Since you are using an Eigenvector approach (transitive preferences), then if you have a lot of certainty about (A,B), and a lot of certainty about (B,C), but no data for (A,C), we will still be fairly certain about your score for (A,C). Alternatively, suppose we have a lot of certainty about (A,B) and a lot for (C,D), but little for (B,C), then it seems getting a sample for (B,C) has very high information value because it not only directly reduces uncertainty for (B,C), it indirectly decreases uncertainty about (A,C), (A,D), and (B,D). That's an interesting point -- a pair reduces uncertainty not only for itself, but potentially for the pairs around it. @johnwarden you are exactly right that the entropy of the posterior Beta distribution would be the best choice, as it most directly models the quantity we care about -- the uncertainty of our data. That said, calculating the entropy for a Beta distribution is not trivial, involving numerical approximations of the digamma function, etc. As an alternative, we could use the variance of the posterior Beta. This also points us towards the uncertainty of our data, and is much simpler to calculate. It's not quite as correct of a model in general, but in the case of a Beta distribution over (0,1) I think it works out to basically the same thing. The implementation would be: (a * b) / ((a + b + 1) * (a + b) ** 2); I was curious about this, so I had ChatGPT plot comparative diagrams of entropy and variance for a Beta distribution. Seems for the simple bivariate case of the Beta distribution, they reflect very similar underlying relationships.
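The variance-based alternative discussed above can be sketched end to end in Python (a hypothetical port of the JavaScript snippets in this thread, stdlib only, with a uniform Beta(1, 1) prior assumed):

```python
import random

def posterior_variance(a, b, prior=1.0):
    """Variance of Beta(a + prior, b + prior): our uncertainty about the
    fraction of voters preferring one item, given a and b votes."""
    a, b = a + prior, b + prior
    return (a * b) / ((a + b + 1) * (a + b) ** 2)

def pick_pair(pairs, rng=random):
    """Sample a pair index with probability proportional to posterior
    variance, preferring the pairs we are least certain about."""
    weights = [posterior_variance(a, b) for (a, b) in pairs]
    return rng.choices(range(len(pairs)), weights=weights, k=1)[0]

# A pair with no votes has high variance; a heavily voted, skewed pair
# has low variance, so it is sampled far less often.
fresh, settled = (0, 0), (40, 2)
```

Note the weighting here is variance-proportional rather than the 1/confidence weighting from the original proposal; both prefer uncertain pairs, but via different curves.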
gharchive/issue
2024-12-01T20:21:20
2025-04-01T06:46:20.595457
{ "authors": [ "johnwarden", "kronosapiens" ], "repo": "zaratanDotWorld/powerRanker", "url": "https://github.com/zaratanDotWorld/powerRanker/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2614154645
Allow ZarfInitConfig to be versioned outside of being tied to binary version Is your feature request related to a problem? Please describe. So we are running RKE2 in an air-gapped environment, so I'm having to write a custom zarf-init that is responsible for installing rook-ceph into the cluster... so because the init package version is tied to the binary used by the pipeline, when we upgrade ceph from 1.15.3 -> 1.15.4 we would need to override the version we already have published... Describe the solution you'd like Given when .metadata.version is defined in a ZarfInitConfig Package Then use that as the package version for the init package Else default to using the binary version if not present Additional context zarf init would also need to accept an optional path argument to allow passing in a different zarf-init package, but still have the default behavior to search for zarf-init-<arch>-<version>.tar.zst I think the problem is still there. I'd like to version the init package too, I'm having the same problem that you correctly described. I'm using zarf v0.43.1 and if the package is not named as zarf-init-<arch>-<version>.tar.zst, zarf will try to download it, no matter what. Here we call the GetInitPackageName func: src/cmd/initialize.go And the GetInitPackageName function will always return something related to the CLI version https://github.com/zarf-dev/zarf/blob/09c41a13112727cc08a2289ead4b16eb895587f0/src/pkg/packager/sources/utils.go#L157 If we want to have the init package version in the name of the file that relates to the ZarfInitPackage metadata.version, I think that we can fix it with pkg.Metadata.Version. Something like:

// GetInitPackageName returns the formatted name of the init package.
func GetInitPackageName() string {
	// No package has been loaded yet so lookup GetArch() with no package info
	arch := config.GetArch()
	if pkg.Metadata.Version == "" {
		return fmt.Sprintf("zarf-init-%s-%s.tar.zst", arch, config.CLIVersion)
	} else {
		return fmt.Sprintf("zarf-init-%s-%s.tar.zst", arch, pkg.Metadata.Version)
	}
}

This way we might feed any "custom versioned" init package to the zarf init command. Let me know if I've missed anything, I'm open to discussion. Thanks for any feedback! So it is possible to deploy a zarf init package by using zarf package deploy. This still does all the checks for creating the zarf state and whatnot. Yea, I think it got lost when I did force pushes to my branch; but Austin gave very similar advice so I'm more than happy to pass it along!
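The fallback proposed in this thread — prefer the init package's own metadata version, else the version of the CLI binary — reduces to a small pure function. A sketch in Python (hypothetical names mirroring the Go proposal, not Zarf's actual code):

```python
def get_init_package_name(arch, cli_version, metadata_version=None):
    """Build the init package file name, preferring an explicit package
    version (metadata.version) over the version of the CLI binary."""
    version = metadata_version or cli_version
    return f"zarf-init-{arch}-{version}.tar.zst"
```

With this shape, republishing the init package for a ceph bump (1.15.3 -> 1.15.4) no longer requires overriding the name derived from the pipeline's binary version.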
gharchive/issue
2024-10-25T13:44:31
2025-04-01T06:46:20.604532
{ "authors": [ "a1994sc", "hawk87" ], "repo": "zarf-dev/zarf", "url": "https://github.com/zarf-dev/zarf/issues/3148", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
214209533
Mark do as a keyword and do catch as a control keyword. The feature do catch has been published to Rust's repository. Fixes #100 @cramertj, Will do catch be stabilized? No-- do catch is (most likely) just a stand in while we work out the stabilization process for catch { ... }. @cramertj, How long will it be with us? It's hard to tell exactly, but changing it is blocked on removing catch as a valid struct name (see here). I'd imagine it's at least a version or two away, since I think the current consensus is that such a change would need at least a warning cycle. Okay. At least six weeks is a long time. I am going to merge it and when it changes, I will open another PR. @zargony, What do you think? Darn, I didn't get any notification of your ping again. Github doesn't like me anymore. Sorry for not catching this :-( I didn't follow do catch development in Rust. How did it turn out? @zargony, Nothing interesting so far.
gharchive/pull-request
2017-03-14T21:17:49
2025-04-01T06:46:20.609554
{ "authors": [ "KalitaAlexey", "cramertj", "zargony" ], "repo": "zargony/atom-language-rust", "url": "https://github.com/zargony/atom-language-rust/pull/102", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2153096336
Add the ability to read the phase counter statuses, and write software to plot them. Currently, debugging a failing test is very difficult. Having the ability to see the evolution of the phase counters over time would be really useful. Preliminary tests showed that plotting this data was not useful.
gharchive/issue
2024-02-26T01:56:31
2025-04-01T06:46:20.641142
{ "authors": [ "zbelateche" ], "repo": "zbelateche/digial-ising", "url": "https://github.com/zbelateche/digial-ising/issues/8", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
536788241
Convert mel to audio Hi, is there anything I can do to convert the mel spectrogram back to audio? I am using librosa to convert it, but it takes ages just to reconstruct 8-10s of audio. Thanks Sorry @juunnn I don't have a great answer. Some people use GANs to produce audio from spectrograms: https://github.com/paarthneekhara/advoc
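For non-neural reconstruction, the classical route is Griffin–Lim phase recovery (librosa ships it as `librosa.griffinlim`, and `librosa.feature.inverse.mel_to_audio` combines it with an approximate mel-filterbank inversion). Below is a minimal numpy-only sketch of the Griffin–Lim idea, assuming a linear-magnitude spectrogram as input; inverting a mel spectrogram would additionally require a pseudo-inverse of the mel filterbank, which this sketch omits. All function names here are mine, not librosa's API:

```python
import numpy as np

def stft(x, n_fft=512, hop=128):
    # Frame the signal, apply a Hann window, and FFT each frame.
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win for i in range(0, len(x) - n_fft + 1, hop)]
    return np.fft.rfft(np.stack(frames), axis=1)

def istft(S, n_fft=512, hop=128):
    # Overlap-add inverse of the STFT above, with window-power normalization.
    win = np.hanning(n_fft)
    frames = np.fft.irfft(S, n=n_fft, axis=1) * win
    length = hop * (frames.shape[0] - 1) + n_fft
    x = np.zeros(length)
    norm = np.zeros(length)
    for i, f in enumerate(frames):
        x[i * hop:i * hop + n_fft] += f
        norm[i * hop:i * hop + n_fft] += win ** 2
    return x / np.maximum(norm, 1e-8)

def griffin_lim(mag, n_iter=32, n_fft=512, hop=128):
    # Start from random phase, then alternate between enforcing the target
    # magnitude and the phase implied by the resynthesized signal.
    rng = np.random.default_rng(0)
    phase = np.exp(2j * np.pi * rng.random(mag.shape))
    for _ in range(n_iter):
        x = istft(mag * phase, n_fft, hop)
        phase = np.exp(1j * np.angle(stft(x, n_fft, hop)))
    return istft(mag * phase, n_fft, hop)
```

This won't be fast for long clips either; the usual speedups are fewer iterations, a smaller `n_fft`, or switching to a neural vocoder as the advoc repo linked above does.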
gharchive/issue
2019-12-12T06:44:34
2025-04-01T06:46:20.652167
{ "authors": [ "juunnn", "zcaceres" ], "repo": "zcaceres/spec_augment", "url": "https://github.com/zcaceres/spec_augment/issues/10", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1859701713
Core BTS UI: Transactions Screen Update the transactions listing screen to conform to https://www.figma.com/file/vYQD6uZtkPgtmgJ3zfauq0/Zashi-UI-082123-V3?type=design&node-id=148%3A7956&mode=design&t=MuSYkgngwcFg2nNl-1
current version
expected version
case paid(success: Bool)
case received
case failed
case sending
case receiving
There are 5 possible states the transactions can end up in, so keep this in mind. send to <address> is possible, and such a row just needs restyling and an updated layout. received (note the received to typo) currently lacks the address; we don't have such information, so write only received and don't bother with from. Update the sending to and receiving states in a similar way. The failed transaction lacks a design, @nuttycom. Lastly, the balance is not present in the current implementation of this screen, so it must be added; please follow the same principle used in the Home (Features/Home) or Send (Features/SendFlow) screens. Both implement the logic for listening to balance updates and rendering the shielded balance. I'm closing this issue temporarily; there is work underway to move transaction history to the home screen in the Figma designs.
gharchive/issue
2023-08-21T16:08:40
2025-04-01T06:46:20.676205
{ "authors": [ "LukasKorba", "nuttycom" ], "repo": "zcash/secant-ios-wallet", "url": "https://github.com/zcash/secant-ios-wallet/issues/809", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
163505678
Completion of non standard package? I have code with an external package import:

package main

import "gopl.io/ch4/github"

func main() {
    github.|
}

But no completion appears after I type github.| (the pipe is the cursor position). Is it supposed to complete external packages at all? At the moment it does not. If yes, could you please advise what needs to be done in order to enable external package completion? Here is my config:

let g:deoplete#enable_at_startup = 1
let g:deoplete#enable_smart_case = 1
call deoplete#custom#set('_', 'matchers', ['matcher_full_fuzzy'])
let g:deoplete#sources#go#gocode_binary = $GOPATH.'/bin/gocode'
let g:deoplete#sources#go#use_cache = 1
let g:deoplete#sources#go#json_directory = $HOME.'/.config/nvim/plugged/deoplete-go/data/json/1.6.2/darwin_amd64'

me too.

let g:deoplete#enable_at_startup = 1
let g:deoplete#sources#go#gocode_binary = '/Users/kevin/.go/bin/gocode'
let g:deoplete#sources#go#package_dot = 1
let g:deoplete#sources#go#use_cache = 1
let g:deoplete#sources#go#json_directory = '/Users/kevin/.cache/deoplete/go/darwin_amd64'

@vbauerster @weghst Hi, thanks for the issue :) Maybe this problem is caused by gocode's original behavior (it's not a bug; it's gocode's normal behavior). If there is an import *** in your code, gocode will parse the ***.a file, such as $GOPATH/pkg/darwin_amd64/gopl.io/ch4/github.a. So if you want completion for gopl.io/ch4/github methods, you need to go install gopl.io/ch4/github. I tested that flow, and it works :) Could you try it? @zchee You are right, a custom package needs to be installed first before it can be referenced by gocode. So after I ran go install gopl.io/ch4/github, completion of the package started to work properly. Thanks for your valuable note! I assume the issue may be closed now) Closed.
gharchive/issue
2016-07-02T06:57:00
2025-04-01T06:46:20.694887
{ "authors": [ "Shougo", "vbauerster", "weghst", "zchee" ], "repo": "zchee/deoplete-go", "url": "https://github.com/zchee/deoplete-go/issues/61", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1837549505
System.NotSupportedException: 'The type 'System.Collections.Generic.List` There is an issue when trying to deserialize List and IList, while IReadOnlyList still works. Any idea on how to cater for these cases? Thank you.

var objectTest = new FooModel()
{
    FooInner = new List<FooInnerModel>()
    {
        new FooInnerModel() { FooInnerMember = 1 },
        new FooInnerModel() { FooInnerMember = 2 }
    }
};
var options = new System.Text.Json.JsonSerializerOptions()
{
    TypeInfoResolver = DataContractResolver.Default,
    DefaultIgnoreCondition = System.Text.Json.Serialization.JsonIgnoreCondition.WhenWritingNull,
    PropertyNameCaseInsensitive = true,
    IncludeFields = true
};
var test = System.Text.Json.JsonSerializer.Serialize(objectTest, options);
var deserialized = System.Text.Json.JsonSerializer.Deserialize<FooModel>(test, options);

[DataContract]
public class FooModel
{
    [DataMember(Name = "foo_inner")]
    public List<FooInnerModel> FooInner { get; set; }
}

[DataContract]
public class FooInnerModel
{
    [DataMember(Name = "foo_inner_member")]
    public int FooInnerMember { get; set; }
}

Fixed in #28 and published v1.0.2 to nuget.org. Unfortunately, that won't work for types that require constructor params: if you change the List above to an array instead, you will get an exception on the latest version. #30 might be a better solution. Originally the resolver was based on DefaultJsonTypeInfoResolver, which for some reason I decided not to use; however, that base resolver has support for the default behaviour. Let me know if you have a test case I should try to add before I release a new package. Sorry for not getting back to you earlier; initial tests are positive on my end. Let me spend some more time going through other test cases and I will let you know if I run into any issues. Thanks again for the prompt fixes. No worries at all. Thanks for your feedback and help!
gharchive/issue
2023-08-05T03:24:28
2025-04-01T06:46:20.698291
{ "authors": [ "vstadn", "zcsizmadia" ], "repo": "zcsizmadia/ZCS.DataContractResolver", "url": "https://github.com/zcsizmadia/ZCS.DataContractResolver/issues/27", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1846929501
Release Code Thanks for your great work! When will you release the code of "DDT: A Diffusion-Driven Transformer-based Framework for Human Mesh Recovery from a Video"? Thank you for your interest. DDT is still in revision; the code will be released once the paper gets accepted.
gharchive/issue
2023-08-11T14:30:18
2025-04-01T06:46:20.699743
{ "authors": [ "flyyyyer", "zczcwh" ], "repo": "zczcwh/POSTER", "url": "https://github.com/zczcwh/POSTER/issues/5", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1062961962
searchContext is null for ExtendedWebElement which uses the format method When an EWE is used via the format method, we don't init the important searchContext, for example:

private static final String ITEM_XPATH_FORMAT = "//td[@class='menuTitleField']/div[text()='%s']";
...
@FindBy(xpath = ITEM_XPATH_FORMAT)
private ExtendedWebElement menuItem;
...
menuItem.format(LOGOUT.getCaption()).click();

As a result we use the whole getDriver() to look for the element, when it is much better to use the exact searchContext. Let's examine the use-case with format and fix it.
01:17:51 2021-11-24 22:17:51 ExtendedWebElement [main-1] [WARN] this.searchContext is null!
01:17:51 java.lang.RuntimeException: this.searchContext is null!
01:17:51 at com.qaprosoft.carina.core.foundation.webdriver.decorator.ExtendedWebElement.findElement(ExtendedWebElement.java:367)
01:17:51 at com.qaprosoft.carina.core.foundation.webdriver.decorator.ExtendedWebElement.getElement(ExtendedWebElement.java:264)
01:17:51 at com.qaprosoft.carina.core.foundation.webdriver.decorator.ExtendedWebElement.doAction(ExtendedWebElement.java:1341)
01:17:51 at com.qaprosoft.carina.core.foundation.webdriver.decorator.ExtendedWebElement.doAction(ExtendedWebElement.java:1320)
01:17:51 at com.qaprosoft.carina.core.foundation.webdriver.decorator.ExtendedWebElement.click(ExtendedWebElement.java:485)
01:17:51 at com.qaprosoft.carina.core.foundation.webdriver.decorator.ExtendedWebElement.click(ExtendedWebElement.java:471)
01:17:51 at com.qaprosoft.carina.core.foundation.webdriver.decorator.ExtendedWebElement.click(ExtendedWebElement.java:462)
01:17:51 at com.qaprosoft...MainMenu.openItem(MainMenu.java:138)
01:17:51 at com.qaprosoft...(MainMenu.java:87)
potential fix: https://github.com/zebrunner/carina/commit/bfd3bbb9a7f6ac37737ffbc58b2890079fb1ad24 format is fixed by adding a new constructor for EWE. 
as of now we have to review one more use-case with findExtendedWebElement where 91st line is: findExtendedWebElement(By.xpath(String.format(CELL_XPATH_FORMAT, i, 3))) 02:10:40 java.lang.RuntimeException: this.searchContext is null! 02:10:40 at com.qaprosoft.carina.core.foundation.webdriver.decorator.ExtendedWebElement.findElement(ExtendedWebElement.java:375) 02:10:40 at com.qaprosoft.carina.core.foundation.webdriver.decorator.ExtendedWebElement.getElement(ExtendedWebElement.java:271) 02:10:40 at com.qaprosoft.carina.core.foundation.webdriver.decorator.ExtendedWebElement.doAction(ExtendedWebElement.java:1349) 02:10:40 at com.qaprosoft.carina.core.foundation.webdriver.decorator.ExtendedWebElement.doAction(ExtendedWebElement.java:1328) 02:10:40 at com.qaprosoft.carina.core.foundation.webdriver.decorator.ExtendedWebElement.getText(ExtendedWebElement.java:435) 02:10:40 at com.qaprosoft....DetailsPage.getAnalogPoints(DetailsPage.java:91) one more use-case in carina-demo: 23:22:27 java.lang.RuntimeException: this.searchContext is null! 
23:22:27 at com.qaprosoft.carina.core.foundation.webdriver.decorator.ExtendedWebElement.findElement(ExtendedWebElement.java:356) 23:22:27 at com.qaprosoft.carina.core.foundation.webdriver.decorator.ExtendedWebElement.getElement(ExtendedWebElement.java:271) 23:22:27 at com.qaprosoft.carina.core.foundation.webdriver.decorator.ExtendedWebElement.doAction(ExtendedWebElement.java:1330) 23:22:27 at com.qaprosoft.carina.core.foundation.webdriver.decorator.ExtendedWebElement.doAction(ExtendedWebElement.java:1309) 23:22:27 at com.qaprosoft.carina.core.foundation.webdriver.decorator.ExtendedWebElement.getText(ExtendedWebElement.java:416) 23:22:27 at com.qaprosoft.carina.demo.gui.pages.CompareModelsPage.compareModels(CompareModelsPage.java:59) 23:22:27 at com.qaprosoft.carina.demo.WebSampleTest.testCompareModels(WebSampleTest.java:89) 23:22:27 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 23:22:27 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 23:22:27 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 23:22:27 at java.base/java.lang.reflect.Method.invoke(Method.java:566) 23:22:27 at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:133) 23:22:27 at org.testng.internal.TestInvoker.invokeMethod(TestInvoker.java:598) 23:22:27 at org.testng.internal.TestInvoker.invokeTestMethod(TestInvoker.java:173) 23:22:27 at org.testng.internal.MethodRunner.runInSequence(MethodRunner.java:46) 23:22:27 at org.testng.internal.TestInvoker$MethodInvocationAgent.invoke(TestInvoker.java:824) 23:22:27 at org.testng.internal.TestInvoker.invokeTestMethods(TestInvoker.java:146) 23:22:27 at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:146) 23:22:27 at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:128) 23:22:27 at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 23:22:27 at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 23:22:27 at java.base/java.lang.Thread.run(Thread.java:834) It was decided to fix only the constructors where we provide an element and, as a result, can get a searchContext. As we are still going to support the DriverHelper.findExtendedWebElement(s) methods, I removed the try/catch in the ExtendedWebElement->findElement method. findElement might discover item(s) both by searchContext and by getDriver(). Will be verified later. Verified on my own; the temporary places where the exception was added have been removed as of now
gharchive/issue
2021-11-24T22:25:41
2025-04-01T06:46:20.745058
{ "authors": [ "okamara", "vdelendik" ], "repo": "zebrunner/carina", "url": "https://github.com/zebrunner/carina/issues/1536", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2474863272
Go to Definition Misbehaves for Imported Variables in Svelte Files I'm encountering an issue with the "Go to Definition" feature in Svelte files. When I try to go to the definition of a variable that is imported from another file, instead of navigating to the variable's actual definition, the feature takes me to the point in the file where the variable is imported. This behavior occurs consistently and disrupts my workflow, as I need to manually locate the original definition. Steps to Reproduce: Import a variable from another file into a Svelte component. Use the "Go to Definition" feature on the imported variable. Observe that the feature takes you to the import statement instead of the actual definition. Expected Behavior: The "Go to Definition" feature should navigate directly to the location in the original file where the imported variable is defined. Actual Behavior: The "Go to Definition" feature takes you to the import statement rather than the variable's definition. Environment: Zed version: Zed Dev 0.151.0 Extension version: 0.0.3 Svelte version: 3.59.2 OS: Microsoft Windows 11 Home 10.0.22631 22631 Go to definition has completely stopped working for me in .svelte files in this most recent release of Zed, 0.149.3. This appears to no longer be an issue for me. Just installed Zed for the first time, added the Svelte extension, and hitting F12 or right-clicking Go to Definition does nothing, neither on the component name nor on the import. Have you added the following to your tsconfig.json under compilerOptions.plugins and run npm i typescript-svelte-plugin? @olafurw

"plugins": [
    {
        "name": "typescript-svelte-plugin",
        // the following options can be set additionally; they are optional; their default values are listed here
        "enabled": true, // enables this plugin
        "assumeIsSvelteProject": false // if true, skip detection and always assume it's a Svelte project
    }
]
gharchive/issue
2024-08-20T06:56:28
2025-04-01T06:46:20.753657
{ "authors": [ "AlbertMarashi", "Kae7in", "luis11011", "olafurw" ], "repo": "zed-extensions/svelte", "url": "https://github.com/zed-extensions/svelte/issues/3", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1656901592
Language server error: TypeScript Check for existing issues [X] Completed Describe the bug / provide steps to reproduce it Zed fail to download typescript server Failed to execute npm info: stdout: "\nUsage: npm \n\nwhere is one of:\n access, adduser, audit, bin, bugs, c, cache, ci, cit,\n clean-install, clean-install-test, completion, config,\n create, ddp, dedupe, deprecate, dist-tag, docs, doctor,\n edit, explore, fund, get, help, help-search, hook, i, init,\n install, install-ci-test, install-test, it, link, list, ln,\n login, logout, ls, org, outdated, owner, pack, ping, prefix,\n profile, prune, publish, rb, rebuild, repo, restart, root,\n run, run-script, s, se, search, set, shrinkwrap, star,\n stars, start, stop, t, team, test, token, tst, un,\n uninstall, unpublish, unstar, up, update, v, version, view,\n whoami\n\nnpm -h quick help on \nnpm -l display full usage info\nnpm help search for help on \nnpm help npm involved overview\n\nSpecify configs in the ini-formatted file:\n /Users/olegkusov/.npmrc\nor on the command line via: npm --key value\nConfig info can be viewed via: npm help config\n\nnpm@6.14.5 /Users/olegkusov/.nvm/versions/node/v12.18.1/lib/node_modules/npm\n\n" stderr: "" Environment Zed: v0.77.3 (stable) OS: macOS 12.2.1 Memory: 16 GiB Architecture: aarch64 If applicable, add mockups / screenshots to help explain present your vision of the feature No response If applicable, attach your ~/Library/Logs/Zed/Zed.log file to this issue. If you only need the most recent lines, you can run the zed: open log command palette action to see the last 1000. No response After update, error is gone. Great. Feel free to close this issue. 🙂 We did land some fixes that should alleviate issues with language servers failing to download, so I will close this for now - feel free to reopen it if you experience it again.
gharchive/issue
2023-04-06T08:18:08
2025-04-01T06:46:20.760423
{ "authors": [ "JosephTLyons", "hovsater", "olegKusov" ], "repo": "zed-industries/community", "url": "https://github.com/zed-industries/community/issues/1366", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1735873873
0.88.3 crashes right at start I am on mac OS 11.7.7, here the backtrace: (base) robertosaccon@Robertos-MBP MacOS % cd /Applications/Zed.app/Contents/MacOS (base) robertosaccon@Robertos-MBP MacOS % ./zed { "thread": "main", "payload": "called Result::unwrap() on an Err value: could not find a non-empty font family matching one of the given names", "location_data": { "file": "crates/gpui/src/fonts.rs", "line": 301 }, "backtrace": [ " 0: backtrace::capture::Backtrace::new", " 1: Zed::init_panic_hook::{{closure}}", " 2: std::panicking::rust_panic_with_hook", " 3: std::panicking::begin_panic_handler::{{closure}}", " 4: std::sys_common::backtrace::__rust_end_short_backtrace", " 5: _rust_begin_unwind", " 6: core::panicking::panic_fmt", " 7: core::result::unwrap_failed", " 8: std::thread::local::LocalKey::with", " 9: <gpui::fonts::TextStyle as core::default::Default>::default", " 10: <theme::Theme as core::default::Default>::default", " 11: std::thread::local::LocalKey::with", " 12: theme::theme_registry::ThemeRegistry::new", " 13: theme::init", " 14: core::ops::function::FnOnce::call_once{{vtable.shim}}", " 15: gpui::platform::mac::platform::did_finish_launching", " 16: ", " 17: ", " 18: ", " 19: ", " 20: ", " 21: ", " 22: ", " 23: ", " 24: ", " 25: ", " 26: ", " 27: ", " 28: ", " 29: ", " 30: ", " 31: ", " 32: ", " 33: ", " 34: <gpui::platform::mac::platform::MacForegroundPlatform as gpui::platform::ForegroundPlatform>::run", " 35: gpui::app::App::run", " 36: Zed::main", " 37: std::sys_common::backtrace::__rust_begin_short_backtrace", " 38: std::rt::lang_start::{{closure}}", " 39: std::rt::lang_start_internal", " 40: _main", "" ] } Same here, with same OS version. @rsaccon are you running an Intel Mac too? Same here, on m2. Tho I didn't get any logs. it's crashing downright I am using macOS 13.0 and having an intel macbook. 
Crashes right away for me too To fix it I have uninstalled Zed (Removed from Applications) and using brew I installed it brew install --cask zed Installs version 0.87.6 and works like a charm Same - crashing on a Macbook Air M2 Same - crashing on a Macbook Air M1 Crashing on MacBook Air 2017 running macOS 12.6.6 I just wanted to let you know that the Zed team is looking into the crashes. In the meantime you could downgrade to 0.83.2. Sorry about the busted build everyone. As @hovsater said, we're looking into it. Hey all, we just hit publish on v0.88.4. Can you all download it and see if you can open Zed again? App unresponsive after start. No UI shows. Thanks for looking into this. Date/Time: 2023-06-01 08:41:52.138 -0700 End time: 2023-06-01 08:42:28.242 -0700 OS Version: macOS 12.6.5 (Build 21G531) Architecture: x86_64h Report Version: 35.1 Incident Identifier: D070ADE8-E25F-48F1-B750-86A8F79062EF Data Source: Stackshots Shared Cache: E0BE2D6E-28EF-33AB-8499-8D9D42337BE7 slid base address 0x7ff804394000, slide 0x4394000 Shared Cache: 57DB871C-A979-37D2-B7C1-B4A0600112B9 slid base address 0x7ff8040e3000, slide 0x40e3000 Command: zed Path: /Applications/Zed.app/Contents/MacOS/zed Identifier: dev.zed.Zed Version: 0.88.4 (20230601.145642) Team ID: MQ55VZLNZQ Architecture: x86_64 Parent: launchd [1] PID: 8813 Time Since Fork: 36s Event: hang Duration: 36.10s Duration Sampled: 1.10s (process was unresponsive for 35 seconds before sampling) Steps: 11 (100ms sampling interval) Hardware model: Macmini7,1 Active cpus: 4 HW page size: 4096 VM page size: 4096 Time Awake Since Boot: 1638211s Time Since Wake: n/a (machine hasn't slept) Fan speed: 1791 rpm Total CPU Time: 0.432s (1368.4M cycles, 938.3M instructions, 1.46c/i) Advisory levels: Battery -> 3, User -> 2, ThermalPressure -> 0, Combined -> 2 Free disk space: 98.13 GB/233.57 GB, low space threshold 3072 MB -------------------------------------------------- Timeline format: stacks are sorted chronologically Use -i and 
-heavy to re-report with count sorting -------------------------------------------------- Heaviest stack for the main thread of the target process: 11 start + 462 (dyld + 21806) [0x115e1852e] 11 main + 44 (zed + 123964) [0x10506143c] 11 std::rt::lang_start_internal::h733b9f59d292f4af + 784 (zed + 48069792) [0x107e1aca0] 11 std::rt::lang_start::_$u7b$$u7b$closure$u7d$$u7d$::h70fc64fde041709a + 12 (zed + 494828) [0x1050bbcec] 11 std::sys_common::backtrace::__rust_begin_short_backtrace::hf1f757012b4cb488 + 6 (zed + 129926) [0x105062b86] 11 Zed::main::h48a59bea541690ce + 3681 (zed + 115345) [0x10505f291] 11 gpui::app::App::run::h1575cbc2921946c8 + 197 (zed + 272197) [0x105085745] 11 _$LT$gpui..platform..mac..platform..MacForegroundPlatform$u20$as$u20$gpui..platform..ForegroundPlatform$GT$::run::ha82b014a05e5ad09 + 335 (zed + 42941695) [0x107936cff] 11 -[NSApplication run] + 586 (AppKit + 195801) [0x7ff8071a4cd9] 11 -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 1394 (AppKit + 251434) [0x7ff8071b262a] 11 _DPSNextEvent + 2036 (AppKit + 259010) [0x7ff8071b43c2] 11 AEProcessAppleEvent + 54 (HIToolbox + 261346) [0x7ff80d43ace2] 11 aeProcessAppleEvent + 419 (AE + 17183) [0x7ff80ae2c31f] 11 ??? (AE + 44250) [0x7ff80ae32cda] 11 ??? 
(AE + 46192) [0x7ff80ae33470] 11 _NSAppleEventManagerGenericHandler + 80 (Foundation + 214422) [0x7ff8055ac596] 11 -[NSAppleEventManager dispatchRawAppleEvent:withRawReply:handlerRefCon:] + 308 (Foundation + 214820) [0x7ff8055ac724] 11 -[NSApplication(NSAppleEventHandling) _handleCoreEvent:withReplyEvent:] + 665 (AppKit + 282000) [0x7ff8071b9d90] 11 -[NSApplication(NSAppleEventHandling) _handleAEOpenEvent:] + 541 (AppKit + 282940) [0x7ff8071ba13c] 11 -[NSApplication _sendFinishLaunchingNotification] + 208 (AppKit + 292204) [0x7ff8071bc56c] 11 -[NSApplication _postDidFinishNotification] + 305 (AppKit + 292894) [0x7ff8071bc81e] 11 -[NSNotificationCenter postNotificationName:object:userInfo:] + 82 (Foundation + 38910) [0x7ff8055817fe] 11 _CFXNotificationPost + 735 (CoreFoundation + 291784) [0x7ff8047433c8] 11 _CFXRegistrationPost + 496 (CoreFoundation + 1125040) [0x7ff80480eab0] 11 ___CFXRegistrationPost_block_invoke + 49 (CoreFoundation + 1125170) [0x7ff80480eb32] 11 __CFNOTIFICATIONCENTER_IS_CALLING_OUT_TO_AN_OBSERVER__ + 12 (CoreFoundation + 481084) [0x7ff80477173c] 11 gpui::platform::mac::platform::did_finish_launching::h3208ee52aad5f575 + 147 (zed + 42960979) [0x10793b853] 11 core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf534abb966ccfcd2 + 756 (zed + 222468) [0x105079504] 11 settings::settings_file::handle_settings_file_changes::h49a7f33614521b1e + 55 (zed + 37997079) [0x10747fa17] 11 async_io::driver::block_on::h9cee42696902a8ca + 344 (zed + 37993336) [0x10747eb78] 11 parking::Inner::park::h20d947b94d86abbc + 210 (zed + 46146114) [0x107c45242] 11 std::sync::condvar::Condvar::wait::h0ab9c89f54776edb + 112 (zed + 46143984) [0x107c449f0] 11 __psynch_cvwait + 10 (libsystem_kernel.dylib + 17370) [0x7ff80467b3da] *11 psynch_cvcontinue + 0 (pthread + 20825) [0xffffff800376d159] Process: zed (Zed) [8813] [unique pid 108302] UUID: C901A235-E767-3C4E-AD03-F79B5B74F3F1 Path: /Applications/Zed.app/Contents/MacOS/zed Identifier: dev.zed.Zed Version: 
0.88.4 (20230601.145642) Team ID: MQ55VZLNZQ Shared Cache: E0BE2D6E-28EF-33AB-8499-8D9D42337BE7 slid base address 0x7ff804394000, slide 0x4394000 Architecture: x86_64 Parent: launchd [1] UID: 501 Footprint: 40.75 MB Time Since Fork: 36s Num samples: 11 (1-11) Note: Unresponsive for 35 seconds before sampling Note: 1 idle work queue thread omitted Thread 0x5749b0 DispatchQueue "com.apple.main-thread"(1) Thread name "main" 11 samples (1-11) priority 46 (base 46) <thread QoS user interactive (requested user interactive), process unclamped, process received importance donation from WindowServer [150], IO tier 0> 11 start + 462 (dyld + 21806) [0x115e1852e] 1-11 11 main + 44 (zed + 123964) [0x10506143c] 1-11 11 std::rt::lang_start_internal::h733b9f59d292f4af + 784 (zed + 48069792) [0x107e1aca0] 1-11 11 std::rt::lang_start::_$u7b$$u7b$closure$u7d$$u7d$::h70fc64fde041709a + 12 (zed + 494828) [0x1050bbcec] 1-11 11 std::sys_common::backtrace::__rust_begin_short_backtrace::hf1f757012b4cb488 + 6 (zed + 129926) [0x105062b86] 1-11 11 Zed::main::h48a59bea541690ce + 3681 (zed + 115345) [0x10505f291] 1-11 11 gpui::app::App::run::h1575cbc2921946c8 + 197 (zed + 272197) [0x105085745] 1-11 11 _$LT$gpui..platform..mac..platform..MacForegroundPlatform$u20$as$u20$gpui..platform..ForegroundPlatform$GT$::run::ha82b014a05e5ad09 + 335 (zed + 42941695) [0x107936cff] 1-11 11 -[NSApplication run] + 586 (AppKit + 195801) [0x7ff8071a4cd9] 1-11 11 -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 1394 (AppKit + 251434) [0x7ff8071b262a] 1-11 11 _DPSNextEvent + 2036 (AppKit + 259010) [0x7ff8071b43c2] 1-11 11 AEProcessAppleEvent + 54 (HIToolbox + 261346) [0x7ff80d43ace2] 1-11 11 aeProcessAppleEvent + 419 (AE + 17183) [0x7ff80ae2c31f] 1-11 11 ??? (AE + 44250) [0x7ff80ae32cda] 1-11 11 ??? 
(AE + 46192) [0x7ff80ae33470] 1-11 11 _NSAppleEventManagerGenericHandler + 80 (Foundation + 214422) [0x7ff8055ac596] 1-11 11 -[NSAppleEventManager dispatchRawAppleEvent:withRawReply:handlerRefCon:] + 308 (Foundation + 214820) [0x7ff8055ac724] 1-11 11 -[NSApplication(NSAppleEventHandling) _handleCoreEvent:withReplyEvent:] + 665 (AppKit + 282000) [0x7ff8071b9d90] 1-11 11 -[NSApplication(NSAppleEventHandling) _handleAEOpenEvent:] + 541 (AppKit + 282940) [0x7ff8071ba13c] 1-11 11 -[NSApplication _sendFinishLaunchingNotification] + 208 (AppKit + 292204) [0x7ff8071bc56c] 1-11 11 -[NSApplication _postDidFinishNotification] + 305 (AppKit + 292894) [0x7ff8071bc81e] 1-11 11 -[NSNotificationCenter postNotificationName:object:userInfo:] + 82 (Foundation + 38910) [0x7ff8055817fe] 1-11 11 _CFXNotificationPost + 735 (CoreFoundation + 291784) [0x7ff8047433c8] 1-11 11 _CFXRegistrationPost + 496 (CoreFoundation + 1125040) [0x7ff80480eab0] 1-11 11 ___CFXRegistrationPost_block_invoke + 49 (CoreFoundation + 1125170) [0x7ff80480eb32] 1-11 11 __CFNOTIFICATIONCENTER_IS_CALLING_OUT_TO_AN_OBSERVER__ + 12 (CoreFoundation + 481084) [0x7ff80477173c] 1-11 11 gpui::platform::mac::platform::did_finish_launching::h3208ee52aad5f575 + 147 (zed + 42960979) [0x10793b853] 1-11 11 core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf534abb966ccfcd2 + 756 (zed + 222468) [0x105079504] 1-11 11 settings::settings_file::handle_settings_file_changes::h49a7f33614521b1e + 55 (zed + 37997079) [0x10747fa17] 1-11 11 async_io::driver::block_on::h9cee42696902a8ca + 344 (zed + 37993336) [0x10747eb78] 1-11 11 parking::Inner::park::h20d947b94d86abbc + 210 (zed + 46146114) [0x107c45242] 1-11 11 std::sync::condvar::Condvar::wait::h0ab9c89f54776edb + 112 (zed + 46143984) [0x107c449f0] 1-11 11 __psynch_cvwait + 10 (libsystem_kernel.dylib + 17370) [0x7ff80467b3da] 1-11 *11 psynch_cvcontinue + 0 (pthread + 20825) [0xffffff800376d159] 1-11 \ For me now it works again with v0.88.4. 
Thanks guys for fixing the issue. Thanks for the quick update! Unfortunately, v0.88.4 opens but also crashes on my system whenever I select a theme. I'm on an M1 Pro chip with macOS 12.6.5. Thanks for the quick update! Unfortunately, v0.88.4 opens but also crashes on my system whenever I select a theme. I'm on an M1 Pro chip with macOS 12.6.5. Do you have anything in the logs or a crash report? Would you mind opening a new issue if so? 0.88.3 and 0.88.4 - App unresponsive after start. No UI shows. Thanks for looking into this. Date/Time: 2023-06-01 08:41:52.138 -0700 End time: 2023-06-01 08:42:28.242 -0700 OS Version: macOS 12.6.5 (Build 21G531) Architecture: x86_64h Report Version: 35.1 Incident Identifier: D070ADE8-E25F-48F1-B750-86A8F79062EF Data Source: Stackshots Shared Cache: E0BE2D6E-28EF-33AB-8499-8D9D42337BE7 slid base address 0x7ff804394000, slide 0x4394000 Shared Cache: 57DB871C-A979-37D2-B7C1-B4A0600112B9 slid base address 0x7ff8040e3000, slide 0x40e3000 Command: zed Path: /Applications/Zed.app/Contents/MacOS/zed Identifier: dev.zed.Zed Version: 0.88.4 (20230601.145642) Team ID: MQ55VZLNZQ Architecture: x86_64 Parent: launchd [1] PID: 8813 Time Since Fork: 36s Event: hang Duration: 36.10s Duration Sampled: 1.10s (process was unresponsive for 35 seconds before sampling) Steps: 11 (100ms sampling interval) Hardware model: Macmini7,1 Active cpus: 4 HW page size: 4096 VM page size: 4096 Time Awake Since Boot: 1638211s Time Since Wake: n/a (machine hasn't slept) Fan speed: 1791 rpm Total CPU Time: 0.432s (1368.4M cycles, 938.3M instructions, 1.46c/i) Advisory levels: Battery -> 3, User -> 2, ThermalPressure -> 0, Combined -> 2 Free disk space: 98.13 GB/233.57 GB, low space threshold 3072 MB -------------------------------------------------- Timeline format: stacks are sorted chronologically Use -i and -heavy to re-report with count sorting -------------------------------------------------- Heaviest stack for the main thread of the target process: 11 start + 
462 (dyld + 21806) [0x115e1852e] 11 main + 44 (zed + 123964) [0x10506143c] 11 std::rt::lang_start_internal::h733b9f59d292f4af + 784 (zed + 48069792) [0x107e1aca0] 11 std::rt::lang_start::_$u7b$$u7b$closure$u7d$$u7d$::h70fc64fde041709a + 12 (zed + 494828) [0x1050bbcec] 11 std::sys_common::backtrace::__rust_begin_short_backtrace::hf1f757012b4cb488 + 6 (zed + 129926) [0x105062b86] 11 Zed::main::h48a59bea541690ce + 3681 (zed + 115345) [0x10505f291] 11 gpui::app::App::run::h1575cbc2921946c8 + 197 (zed + 272197) [0x105085745] 11 _$LT$gpui..platform..mac..platform..MacForegroundPlatform$u20$as$u20$gpui..platform..ForegroundPlatform$GT$::run::ha82b014a05e5ad09 + 335 (zed + 42941695) [0x107936cff] 11 -[NSApplication run] + 586 (AppKit + 195801) [0x7ff8071a4cd9] 11 -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 1394 (AppKit + 251434) [0x7ff8071b262a] 11 _DPSNextEvent + 2036 (AppKit + 259010) [0x7ff8071b43c2] 11 AEProcessAppleEvent + 54 (HIToolbox + 261346) [0x7ff80d43ace2] 11 aeProcessAppleEvent + 419 (AE + 17183) [0x7ff80ae2c31f] 11 ??? (AE + 44250) [0x7ff80ae32cda] 11 ??? 
(AE + 46192) [0x7ff80ae33470] 11 _NSAppleEventManagerGenericHandler + 80 (Foundation + 214422) [0x7ff8055ac596] 11 -[NSAppleEventManager dispatchRawAppleEvent:withRawReply:handlerRefCon:] + 308 (Foundation + 214820) [0x7ff8055ac724] 11 -[NSApplication(NSAppleEventHandling) _handleCoreEvent:withReplyEvent:] + 665 (AppKit + 282000) [0x7ff8071b9d90] 11 -[NSApplication(NSAppleEventHandling) _handleAEOpenEvent:] + 541 (AppKit + 282940) [0x7ff8071ba13c] 11 -[NSApplication _sendFinishLaunchingNotification] + 208 (AppKit + 292204) [0x7ff8071bc56c] 11 -[NSApplication _postDidFinishNotification] + 305 (AppKit + 292894) [0x7ff8071bc81e] 11 -[NSNotificationCenter postNotificationName:object:userInfo:] + 82 (Foundation + 38910) [0x7ff8055817fe] 11 _CFXNotificationPost + 735 (CoreFoundation + 291784) [0x7ff8047433c8] 11 _CFXRegistrationPost + 496 (CoreFoundation + 1125040) [0x7ff80480eab0] 11 ___CFXRegistrationPost_block_invoke + 49 (CoreFoundation + 1125170) [0x7ff80480eb32] 11 __CFNOTIFICATIONCENTER_IS_CALLING_OUT_TO_AN_OBSERVER__ + 12 (CoreFoundation + 481084) [0x7ff80477173c] 11 gpui::platform::mac::platform::did_finish_launching::h3208ee52aad5f575 + 147 (zed + 42960979) [0x10793b853] 11 core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf534abb966ccfcd2 + 756 (zed + 222468) [0x105079504] 11 settings::settings_file::handle_settings_file_changes::h49a7f33614521b1e + 55 (zed + 37997079) [0x10747fa17] 11 async_io::driver::block_on::h9cee42696902a8ca + 344 (zed + 37993336) [0x10747eb78] 11 parking::Inner::park::h20d947b94d86abbc + 210 (zed + 46146114) [0x107c45242] 11 std::sync::condvar::Condvar::wait::h0ab9c89f54776edb + 112 (zed + 46143984) [0x107c449f0] 11 __psynch_cvwait + 10 (libsystem_kernel.dylib + 17370) [0x7ff80467b3da] *11 psynch_cvcontinue + 0 (pthread + 20825) [0xffffff800376d159] Process: zed (Zed) [8813] [unique pid 108302] UUID: C901A235-E767-3C4E-AD03-F79B5B74F3F1 Path: /Applications/Zed.app/Contents/MacOS/zed Identifier: dev.zed.Zed Version: 
0.88.4 (20230601.145642) Team ID: MQ55VZLNZQ Shared Cache: E0BE2D6E-28EF-33AB-8499-8D9D42337BE7 slid base address 0x7ff804394000, slide 0x4394000 Architecture: x86_64 Parent: launchd [1] UID: 501 Footprint: 40.75 MB Time Since Fork: 36s Num samples: 11 (1-11) Note: Unresponsive for 35 seconds before sampling Note: 1 idle work queue thread omitted Thread 0x5749b0 DispatchQueue "com.apple.main-thread"(1) Thread name "main" 11 samples (1-11) priority 46 (base 46) <thread QoS user interactive (requested user interactive), process unclamped, process received importance donation from WindowServer [150], IO tier 0> 11 start + 462 (dyld + 21806) [0x115e1852e] 1-11 11 main + 44 (zed + 123964) [0x10506143c] 1-11 11 std::rt::lang_start_internal::h733b9f59d292f4af + 784 (zed + 48069792) [0x107e1aca0] 1-11 11 std::rt::lang_start::_$u7b$$u7b$closure$u7d$$u7d$::h70fc64fde041709a + 12 (zed + 494828) [0x1050bbcec] 1-11 11 std::sys_common::backtrace::__rust_begin_short_backtrace::hf1f757012b4cb488 + 6 (zed + 129926) [0x105062b86] 1-11 11 Zed::main::h48a59bea541690ce + 3681 (zed + 115345) [0x10505f291] 1-11 11 gpui::app::App::run::h1575cbc2921946c8 + 197 (zed + 272197) [0x105085745] 1-11 11 _$LT$gpui..platform..mac..platform..MacForegroundPlatform$u20$as$u20$gpui..platform..ForegroundPlatform$GT$::run::ha82b014a05e5ad09 + 335 (zed + 42941695) [0x107936cff] 1-11 11 -[NSApplication run] + 586 (AppKit + 195801) [0x7ff8071a4cd9] 1-11 11 -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 1394 (AppKit + 251434) [0x7ff8071b262a] 1-11 11 _DPSNextEvent + 2036 (AppKit + 259010) [0x7ff8071b43c2] 1-11 11 AEProcessAppleEvent + 54 (HIToolbox + 261346) [0x7ff80d43ace2] 1-11 11 aeProcessAppleEvent + 419 (AE + 17183) [0x7ff80ae2c31f] 1-11 11 ??? (AE + 44250) [0x7ff80ae32cda] 1-11 11 ??? 
(AE + 46192) [0x7ff80ae33470] 1-11 11 _NSAppleEventManagerGenericHandler + 80 (Foundation + 214422) [0x7ff8055ac596] 1-11 11 -[NSAppleEventManager dispatchRawAppleEvent:withRawReply:handlerRefCon:] + 308 (Foundation + 214820) [0x7ff8055ac724] 1-11 11 -[NSApplication(NSAppleEventHandling) _handleCoreEvent:withReplyEvent:] + 665 (AppKit + 282000) [0x7ff8071b9d90] 1-11 11 -[NSApplication(NSAppleEventHandling) _handleAEOpenEvent:] + 541 (AppKit + 282940) [0x7ff8071ba13c] 1-11 11 -[NSApplication _sendFinishLaunchingNotification] + 208 (AppKit + 292204) [0x7ff8071bc56c] 1-11 11 -[NSApplication _postDidFinishNotification] + 305 (AppKit + 292894) [0x7ff8071bc81e] 1-11 11 -[NSNotificationCenter postNotificationName:object:userInfo:] + 82 (Foundation + 38910) [0x7ff8055817fe] 1-11 11 _CFXNotificationPost + 735 (CoreFoundation + 291784) [0x7ff8047433c8] 1-11 11 _CFXRegistrationPost + 496 (CoreFoundation + 1125040) [0x7ff80480eab0] 1-11 11 ___CFXRegistrationPost_block_invoke + 49 (CoreFoundation + 1125170) [0x7ff80480eb32] 1-11 11 __CFNOTIFICATIONCENTER_IS_CALLING_OUT_TO_AN_OBSERVER__ + 12 (CoreFoundation + 481084) [0x7ff80477173c] 1-11 11 gpui::platform::mac::platform::did_finish_launching::h3208ee52aad5f575 + 147 (zed + 42960979) [0x10793b853] 1-11 11 core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf534abb966ccfcd2 + 756 (zed + 222468) [0x105079504] 1-11 11 settings::settings_file::handle_settings_file_changes::h49a7f33614521b1e + 55 (zed + 37997079) [0x10747fa17] 1-11 11 async_io::driver::block_on::h9cee42696902a8ca + 344 (zed + 37993336) [0x10747eb78] 1-11 11 parking::Inner::park::h20d947b94d86abbc + 210 (zed + 46146114) [0x107c45242] 1-11 11 std::sync::condvar::Condvar::wait::h0ab9c89f54776edb + 112 (zed + 46143984) [0x107c449f0] 1-11 11 __psynch_cvwait + 10 (libsystem_kernel.dylib + 17370) [0x7ff80467b3da] 1-11 *11 psynch_cvcontinue + 0 (pthread + 20825) [0xffffff800376d159] 1-11 \ This actually looks like this other issue were about to release a 
fix for. I'm going to close this issue, as I think we fixed the issue specific to this. If you run into a crash mentioning "could not find a non-empty font family matching one of the given" on v0.88.4 or higher, please re-open.
gharchive/issue
2023-06-01T09:28:33
2025-04-01T06:46:20.787509
{ "authors": [ "Emblaze", "Gbuomprisco", "Hala2Madrid", "JosephTLyons", "Sameer2307", "Sparkenstein", "carojkov", "gogocharli", "hovsater", "rsaccon", "srikanth-wgl" ], "repo": "zed-industries/community", "url": "https://github.com/zed-industries/community/issues/1592", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1321626897
Astro Support
Check for existing issues: [X] Completed
Language: Astro
Tree Sitter parser link: https://github.com/virchau13/tree-sitter-astro
Language server link: https://github.com/withastro/language-tools/tree/main/packages/language-server
Misc notes: No response
Ref: #600
Any updates on this? Just started using zed but would love to see Astro support
Me too—I notice Svelte support got added not so long ago, any chance for sneaking Astro in there?
gharchive/issue
2022-07-29T00:38:53
2025-04-01T06:46:20.791590
{ "authors": [ "BryanSchuetz", "jasikpark", "max", "philstainer" ], "repo": "zed-industries/community", "url": "https://github.com/zed-industries/community/issues/402", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
329628020
Update README with Example
Closes #18
CC: @pranaygp
Thanks for helping @mfix22. I wonder if we can have a smaller example that just showcases the API, and direct them to /examples for more (which you've already done 😄). Maybe something like:

const Sema = require('async-sema');
const s = new Sema(4, { capacity: 100 }) // 4 async calls in "parallel", up to 100 async calls on this sema

async function fetchData() {
  await s.acquire()
  console.log(s.nrWaiting() + ' calls to fetch are waiting')
  // ... do some long async stuff
  s.release()
}

for (let i = 0; i < 100; i++) {
  fetchData()
}

What do you think?
@pranaygp I was thinking the same thing, but didn't know if you wanted a working example or a "stub" example. Updating . . . this should still be a working example, right?
@pranaygp definitely, I was just referring to the "... do some long async stuff" 😄
gharchive/pull-request
2018-06-05T21:16:04
2025-04-01T06:46:20.839737
{ "authors": [ "mfix22", "pranaygp" ], "repo": "zeit/async-sema", "url": "https://github.com/zeit/async-sema/pull/19", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
397259507
Hyper (Windows) hangs and becomes unkillable
[x] I am on the latest Hyper.app version
[x] I have searched the issues of this repo and believe that this is not a duplicate
OS version and name: Windows 10.0.17134 Home
Hyper.app version: 2.1.1 (was happening also on 2.0.0)
Link of a Gist with the contents of your .hyper.js: https://gist.github.com/CAFxX/29719b9b1ce74ba41c5103d4c2e70836
Relevant information from devtools (CMD+ALT+I on macOS, CTRL+SHIFT+I elsewhere): N/A (process is not responding)
The issue is reproducible in vanilla Hyper.app: Haven't tested.
Issue
Sometimes (randomly; I've seen it happen when using ssh) the Hyper window freezes, the title bar changes to "Hyper (not responding)", and a popup appears saying that the application is not responding, prompting "End process" or "Cancel" (neither has any effect). Trying to kill the process from Task Manager fails with "Unable to terminate process / The operation could not be completed / Access is denied". Trying to kill the process from Process Explorer (running as administrator) similarly yields "Error trying to terminate Hyper.exe: access is denied". Curiously, Process Explorer flags the Hyper.exe process as "Suspended". The only way I have found so far to kill the process is by rebooting the system.
The only similar issue found so far is https://github.com/zeit/hyper/issues/3342 but in my case the Hyper process/window never recovers and can't be killed, so I suppose the two issues are different.
Happens on mac as well, pretty much any time I don't use it longer than a minute it takes a long time to wake up again. This used to be a problem in 1.x once but it was fixed. Now it's back.
@drcmda in our case the situation never recovers ("never wakes up again"). Since you wrote that in your case "it takes a long time to wake up again", I suppose yours is a different issue.
Same issue: had an ssh connection active, and after the laptop wakes up from sleep Hyper hangs. Unable to kill the process from Task Manager as well.
👍 Same on macOS Mojave 10.14.5 and with Hyper 3.0.2. I don't know exactly how to reproduce, but very often I end up with unkillable windows.
I am having the same issue. But it just hangs on its own. It does not have to be idle!!!
I have the same problem. I notice it happens consistently when I have an open SSH connection in one of my Hyper tabs.
I have the same problem. I can reproduce with these steps: split a tab with hotkey Ctrl + Shift + E, then try closing it using hotkey Ctrl + Shift + W.
^ Can reproduce with the same steps
Often it happens while I am running file watchers and then open/split into a new tab with CMD + Shift + E or close an active tab with CMD + Shift + W. Happening on newest macOS but also on 1 version older.
@jackblackCH Have a look at #3749.
No fix yet for this issue? For me happening as well. Using on MacOS. I really love Hyper but I'm forced to switch to iTerm. Can't keep losing all my open tabs every time Hyper stops responding.
I sadly had to switch back as well; the stable version became too unstable and time consuming.
This happens to me when an SSH session times out.
It happens on Windows as well.
gharchive/issue
2019-01-09T08:25:36
2025-04-01T06:46:20.853735
{ "authors": [ "AshMcConnell", "CAFxX", "bet4it", "doojin", "drcmda", "imkin", "jackblackCH", "krafix", "ludugz", "nbau21", "neilpalima", "papmech", "pauloprescendo" ], "repo": "zeit/hyper", "url": "https://github.com/zeit/hyper/issues/3402", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
269448851
Multiple keymaps and mousetrap
This is a HUGE PR. I tried to clean my git history but this is really hard to rebase with some merge commits 😞
This PR has 2 goals:
Add a feature to specify multiple keymaps for a command
Use mousetrap to intercept and interpret key events, because it is smarter than Electron
I will not explain how keymaps worked before, but I needed to rewrite them a lot. This is how they work now:
Keymap configuration
Configuration is now stored in a single place: a normalized keymaps object directly in the main config object:
{ 'tab:new': ['command+t'], 'tab:jump:prefix': ['command]'], ... }
A user can define a command with a single shortcut or an array of shortcuts. A plugin can add commands/keymaps using a new function, decorateKeymaps, that is applied like the other decorating functions: the keymap is passed to this plugin handler, and the plugin can add a shortcut to a command or completely overwrite/remove a shortcut. The previous extendKeymaps function has been marked as deprecated.
Normalization consists of these steps:
All shortcuts are transformed to arrays if they are strings.
All deprecated shortcuts are checked and transformed. For now, only cmd is warned about and transformed to command to suit mousetrap requirements.
All *:prefix commands are extrapolated to *:1, *:2 ... *:last commands.
Mousetrap interception
When the hyper container is mounted, mousetrap is instantiated and all keybindings are made. For now, my fork is declared as a dependency. I made a PR to include an option to make mousetrap intercept keys in the capturing phase and not only in the bubbling phase. Why? Without this feature, xterm would receive keys before mousetrap and we would need to duplicate key analysis to know whether xterm should ignore an event or not. With this feature, every mousetrap binding flags key events and the xterm keyboard handler can rely on this flag to decide whether to ignore events.
Command triggering
When a configured shortcut is hit, the bound mousetrap function is called.
This function verifies whether a handler has been registered by a plugin for this command. If there is one, it calls this handler with the key event as parameter. If not, this function dispatches a redux action that makes an RPC call as a side effect. This redux action can be logged, changed or discarded by plugins like any redux action. The RPC call is command with the command name as parameter (like 'pane:splitVertical'). This RPC is bound to commands that can trigger an RPC call to focusedWindow.
Some actions, like copy/paste, are handled directly by Electron with its menu-item role mechanism. In order to trigger them, we need to intercept them with mousetrap like other actions, but we can't preventDefault, in order to let Electron act. That's why we have a whitelist of actions that should not be prevented. We need to prevent default for other events, because commands are registered both with a mousetrap binding (to allow multiple shortcuts) and with Electron's menuItems (to be clickable). If defaults were not prevented, commands would be triggered twice.
Some actions could be triggered directly by the bound mousetrap function because they are renderer-only actions (like pane splitting), but I prefer to stick to the current behaviour: commands are always triggered by the main process even if their source is a shortcut hit in a BrowserWindow.
What is missing?
[ ] More tests
[ ] Documentation
cc @rauchg @albinekb
Adding a big feature and removing 600 lines. That's a way, I like it
This PR looks fantastic to me. Let's get it into canary! Great work @chabou
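The normalization steps described above can be sketched roughly like this (hypothetical helper names, not the actual Hyper source; the *:prefix extrapolation step is elided):

```typescript
// Sketch of keymap normalization: single shortcuts become arrays, and the
// deprecated "cmd" modifier is rewritten to "command" to suit mousetrap.
type RawKeymap = Record<string, string | string[]>;

function normalizeKeymap(raw: RawKeymap): Record<string, string[]> {
  const normalized: Record<string, string[]> = {};
  for (const [command, shortcuts] of Object.entries(raw)) {
    // Step 1: single shortcut -> array of shortcuts
    const list = Array.isArray(shortcuts) ? shortcuts : [shortcuts];
    // Step 2: deprecated "cmd" -> "command" (the real code also warns)
    normalized[command] = list.map((s) => s.replace(/\bcmd\b/g, 'command'));
  }
  return normalized;
}
```

A normalized shape like this is what lets the same command be bound once per shortcut in mousetrap.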
gharchive/pull-request
2017-10-29T23:42:31
2025-04-01T06:46:20.866004
{ "authors": [ "albinekb", "chabou", "rauchg" ], "repo": "zeit/hyper", "url": "https://github.com/zeit/hyper/pull/2412", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
519571626
Invalid URL for nodejs binaries
As in title, the URL that is generated for curl is invalid. It gives: https://nodejs.org/dist/v12.13.0/node-v12.13.0-linux-debian_8.11-x86_64-linux-debian_8.11-x86_64.tar.gz which just turns up a 402 error on nodejs's website.
➜ ./install-node_lts.sh --prefix $HOME/.local/share
Configuration
> Version: v12.13.0 (resolved from lts)
> Prefix: /users/jrwrigh/.local/share
> Platform: linux-debian_8.11-x86_64
> Arch: linux-debian_8.11-x86_64
> Tarball URL: https://nodejs.org/dist/v12.13.0/node-v12.13.0-linux-debian_8.11-x86_64-linux-debian_8.11-x86_64.tar.gz
! Prefix bin directory /users/jrwrigh/.local/share/bin is not in your $PATH
? Install Node.js v12.13.0 to /users/jrwrigh/.local/share? [yN] y
> Installing Node.js, please wait…
x Command failed (exit code 22): curl --silent --fail https://nodejs.org/dist/v12.13.0/node-v12.13.0-linux-debian_8.11-x86_64-linux-debian_8.11-x86_64.tar.gz
gzip: stdin: unexpected end of file
tar: Child returned status 1
tar: Error is not recoverable: exiting now
Figured out the issue: My session had $PLATFORM and $ARCH already defined. Is there a way to ignore those definitions in the script?
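One way a shell script can protect itself against variables inherited from the calling session is to compute the values locally instead of trusting the environment. A sketch of the general idea (hypothetical, not the actual install-node fix):

```shell
#!/bin/sh
# Overwrite any PLATFORM/ARCH inherited from the calling session by
# deriving them locally (variable names here are for illustration).
PLATFORM="$(uname -s | tr '[:upper:]' '[:lower:]')"
ARCH="$(uname -m)"
echo "platform=$PLATFORM arch=$ARCH"
```

Because the assignments are unconditional, whatever the session exported is simply replaced before the download URL is built.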
gharchive/issue
2019-11-07T23:27:51
2025-04-01T06:46:20.867999
{ "authors": [ "jrwrigh" ], "repo": "zeit/install-node", "url": "https://github.com/zeit/install-node/issues/4", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
392453750
Use single call on aliases and certs on domain rm
This will make a single call to get aliases and certs while removing a domain.
Codecov Report
Merging #1763 into master will not change coverage. The diff coverage is n/a.
@@           Coverage Diff           @@
##           master   #1763    +/-  ##
=======================================
  Coverage   70.96%   70.96%
=======================================
  Files          12       12
  Lines         341      341
=======================================
  Hits          242      242
  Misses         99       99
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update e160e4e...ee603d7. Read the comment docs.
gharchive/pull-request
2018-12-19T06:05:32
2025-04-01T06:46:20.873427
{ "authors": [ "codecov-io", "joecohens" ], "repo": "zeit/now-cli", "url": "https://github.com/zeit/now-cli/pull/1763", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
456372939
[now update] Handle permissions errors in the file replacement method
Handle permissions errors in both update mechanisms
Handle EBUSY (Windows) as a file busy error
Update permissions error prompt to say "Administrator Command Prompt" on Windows
Codecov Report
Merging #2430 into canary will decrease coverage by 0.41%. The diff coverage is 0%.
@@            Coverage Diff            @@
##           canary    #2430     +/-  ##
==========================================
- Coverage    11.2%   10.78%   -0.42%
==========================================
  Files         251      251
  Lines        9241     9209      -32
  Branches     1033     1026       -7
==========================================
- Hits         1035      993      -42
- Misses       8103     8114      +11
+ Partials      103      102       -1
Impacted Files                  Coverage Δ
src/commands/update.ts          0% <0%> (ø) :arrow_up:
src/util/dev/builder-cache.ts   67.27% <0%> (-9.05%) :arrow_down:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update c368f98...819d6ce. Read the comment docs.
gharchive/pull-request
2019-06-14T18:12:09
2025-04-01T06:46:20.881363
{ "authors": [ "TooTallNate", "codecov-io" ], "repo": "zeit/now-cli", "url": "https://github.com/zeit/now-cli/pull/2430", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
230761210
Namespace your map variables
Other libraries use variables such as breakpoints. Consider namespacing the breakpoints map variable to avoid conflicts, for example $typi-breakpoints.
Did you find a conflict in these namespaces yourself? I'm wondering because I didn't feel that's a problem, but happy to hear more and investigate it.
Agree with this. I use a custom breakpoint function, called '$breakpoint', which fails with this as the format is not compatible with your breakpoint. Can I easily change your name to $typi-breakpoints?
@tomdowning82 Not right now. You need to provide a $breakpoint variable for each typi mixin you call right now. I have yet to abstract these away. May need some time before I can get to this. But point taken. If you can send a pull request, that'll help out a ton.
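In Sass, a namespaced library default is typically declared with !default so consumers can still override it before importing the library. A sketch of the suggestion (the breakpoint values here are placeholders, not typi's defaults):

```scss
// Prefixed map avoids clashing with a user's own $breakpoints variable;
// !default keeps it overridable by anything defined earlier.
$typi-breakpoints: (
  small: 400px,
  large: 800px
) !default;
```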
gharchive/issue
2017-05-23T15:59:24
2025-04-01T06:46:20.885017
{ "authors": [ "cjwd", "tomdowning82", "zellwk" ], "repo": "zellwk/typi", "url": "https://github.com/zellwk/typi/issues/36", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
130356136
Extract transmission logic from Producer
Transmission is responsible for sending buffered messages to the brokers in a cluster. There's no retry logic – that stuff is still handled by Producer. Downside: the mutable MessageBuffer will be shared between Producer and Transmission. Part of the response handling is marking messages as sent in the buffer, so it's hard to avoid.
I'm not quite sure about the name. The idea is to represent a single pass at transmitting messages to the cluster, but I'm not sure if we should instead use something closer to "send", e.g. MessageSender... but that sounds pretty horrible...
@bquorning I'd like your take on this. Messenger?
@bquorning it sends message batches, possibly subdividing the batches and spreading them among multiple recipients.
gharchive/pull-request
2016-02-01T12:58:33
2025-04-01T06:46:20.913208
{ "authors": [ "bquorning", "dasch" ], "repo": "zendesk/ruby-kafka", "url": "https://github.com/zendesk/ruby-kafka/pull/41", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
60099317
Enhancement: added hhvm as allow failure
As the page does not run on hhvm, it should be marked as allow_failures. This would also be required as of PR #421 and the hhvm exception bug in phpunit, as a fix has not landed in phpunit yet.
PHPUnit Bug: https://github.com/sebastianbergmann/phpunit-mock-objects/issues/207
@Ocramius Do you have any objections? I'm fine with it!
Thanks, @ins0!
gharchive/pull-request
2015-03-06T13:04:59
2025-04-01T06:46:20.915681
{ "authors": [ "ins0", "localheinz" ], "repo": "zendframework/modules.zendframework.com", "url": "https://github.com/zendframework/modules.zendframework.com/pull/475", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
533865874
Update .travis.yml [x] Is this related to quality assurance? Thanks, @andreybolonin!
gharchive/pull-request
2019-12-06T10:08:42
2025-04-01T06:46:20.916748
{ "authors": [ "andreybolonin", "michalbundyra" ], "repo": "zendframework/zend-diactoros", "url": "https://github.com/zendframework/zend-diactoros/pull/382", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
354311731
Documentation task: write a recipe for extracting HTTP request data if authenticated
As suggested in #31, we need to write a recipe to extract HTTP data from the request, if authenticated.
This repository has been closed and moved to mezzio/mezzio-authentication; a new issue has been opened at https://github.com/mezzio/mezzio-authentication/issues/3.
gharchive/issue
2018-08-27T13:08:32
2025-04-01T06:46:20.918090
{ "authors": [ "ezimuel", "weierophinney" ], "repo": "zendframework/zend-expressive-authentication", "url": "https://github.com/zendframework/zend-expressive-authentication/issues/36", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
308685972
Feature: Add PHP Pug to the available templating systems in the installer
Hello, I would like to propose, as an add-on, the option to use PHP Pug https://github.com/pug-php/pug as a template engine. I already have a working adapter for Zend\Expressive\Template\TemplateRendererInterface: https://github.com/kpicaza/infw-pug. On the other hand, I made a fork and added the default templates and settings in the installer: https://github.com/kpicaza/zend-expressive-skeleton/tree/feature/pug-template-renderer. If you think it's a good idea, I'll be happy to open the pull request and create the corresponding documentation for the Expressive website. Greetings.
Although I appreciate the offer, I'm not sure if it's a good idea to add more packages, specifically packages that are not stable yet. But that's just my opinion. However, that doesn't mean you can't use your package right now. There is a feature to install alternative packages: instead of entering one of the selections, you can actually type the package name and version.
Which template engine do you want to use?
[1] Plates
[2] Twig
[3] zend-view installs zend-servicemanager
[n] None of the above
Make your selection or type a composer package name and version (n): infw/pug:0.1
- Searching for infw/pug:0.1
- Adding package infw/pug (0.1)
That feature has been there from the beginning and it allows you to install any alternative package you want. It has its limitations though:
You need to enter the alternative package as namespace/package:1.0. It needs the version. I can't remember why I coded it like that, but I think it had to do with the way composer works or my limited composer api knowledge at that time :)
Templates are not copied, but you can configure your ConfigProvider in such a way that it uses the default templates directly from the package itself. This doesn't work for containers, as the container.php file needs to be copied.
Thanks a lot for the response. I have already made some adapters for my projects using pug with Expressive (and I'm happy with it ;-D). I love Zend Expressive and I would enjoy helping with anything that I can. Best Regards.
Closing this in favor of #267
gharchive/issue
2018-03-26T18:31:29
2025-04-01T06:46:20.923442
{ "authors": [ "kpicaza", "xtreamwayz" ], "repo": "zendframework/zend-expressive-skeleton", "url": "https://github.com/zendframework/zend-expressive-skeleton/issues/248", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
389065832
Added missing test case to test extraction with naming strategy and strategy It should increase coverage just a small bit Thanks, @webimpress!
gharchive/pull-request
2018-12-09T21:25:53
2025-04-01T06:46:20.924497
{ "authors": [ "webimpress", "weierophinney" ], "repo": "zendframework/zend-hydrator", "url": "https://github.com/zendframework/zend-hydrator/pull/92", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
79142717
Zend_Gdata_MediaMimeStream sets invalid content type in 1.12.13 PHP Fatal error: Uncaught exception 'Zend_Http_Header_Exception_InvalidArgumentException' with message 'Invalid header value' in /usr/share/php/zend-framework-1.12.13/Zend/Http/Header/HeaderValue.php:124 Stack trace: #0 /usr/share/php/zend-framework-1.12.13/Zend/Http/Client.php(1605): Zend_Http_Header_HeaderValue::assertValid('multipart/relat...') #1 /usr/share/php/zend-framework-1.12.13/Zend/Http/Client.php(467): Zend_Http_Client->_validateHeaderValue('multipart/relat...') #2 /usr/share/php/zend-framework-1.12.13/Zend/Http/Client.php(439): Zend_Http_Client->setHeaders('Content-Type', 'multipart/relat...') #3 /usr/share/php/zend-framework-1.12.13/Zend/Gdata/App.php(650): Zend_Http_Client->setHeaders(Array) #4 /usr/share/php/zend-framework-1.12.13/Zend/Gdata.php(219): Zend_Gdata_App->performHttpRequest('POST', 'http://uploads....', Array, Object(Zend_Gdata_MediaMimeStream), 'multipart/relat...', NULL) #5 /usr/share/php/zend-framework-1.12.13/Zend/Gdata/App.php(908): Zend_Gdata->performHttpRequest('POST', 'http://uploads....', in /usr/share/php/zend-framework-1.12.13/Zend/Http/Header/HeaderValue.php on line 124 Zend_Gdata_MediaMimeStream:L184 public function getContentType() { return 'multipart/related;boundary="' . $this->_boundaryString . '"' . "\r\n"; } ping @weierophinney Hope this gets fixed soon because now it's practically impossible to upload any file. no fix?
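Since HTTP header values must not contain CR or LF, the likely fix is simply to drop the trailing "\r\n" that the quoted method appends. A sketch, not the official patch:

```php
public function getContentType()
{
    // No trailing "\r\n": values containing CR/LF are rejected by
    // Zend_Http_Header_HeaderValue::assertValid(), per the trace above.
    return 'multipart/related;boundary="' . $this->_boundaryString . '"';
}
```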
gharchive/issue
2015-05-21T20:08:39
2025-04-01T06:46:20.931024
{ "authors": [ "exilod", "froschdesign", "platedodev", "rokclimb15" ], "repo": "zendframework/zf1", "url": "https://github.com/zendframework/zf1/issues/572", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
869043765
split github jobs so PRs can be tested
Move notifications to a separate job, run only on push.
Build and test will execute on [pull_request, push].
Use actions-rs/toolchain@v1 to get the rust toolchain.
Add matrix hook to allow multiple toolchain versions in the future (now set to [stable]).
Run all the cargo tests, not just test_pageserver.
I've fixed test_wal_acceptor, so you can enable them again in CI
Any opinions on this @kelvich? I'm not sure who uses the Telegram notifications or how to test whether they're working.
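A rough sketch of what the described split could look like as a workflow file (job names, step details and action versions here are assumptions, not the actual repository workflow):

```yaml
name: CI
on: [pull_request, push]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        toolchain: [stable]   # hook for adding more toolchain versions later
    steps:
      - uses: actions/checkout@v2
      - uses: actions-rs/toolchain@v1
        with:
          toolchain: ${{ matrix.toolchain }}
          override: true
      - run: cargo test --all   # all the cargo tests, not just test_pageserver

  notify:
    if: github.event_name == 'push'   # notifications skipped for PRs
    runs-on: ubuntu-latest
    steps:
      - run: echo "send notification here"
```

Gating only the notify job on the event type is what lets PRs from forks run the build without touching notification secrets.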
gharchive/pull-request
2021-04-27T16:45:50
2025-04-01T06:46:20.933657
{ "authors": [ "ericseppanen", "lubennikovaav" ], "repo": "zenithdb/zenith", "url": "https://github.com/zenithdb/zenith/pull/74", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2082539247
Add helper message for zenml up --blocking login Over on #2213 at https://github.com/zenml-io/zenml/issues/2213#issuecomment-1892549278 it was raised that there's no message about what username and password to use when running zenml up --blocking. This would apply to Windows users mainly, so this PR outputs such a message. @coderabbitai review
gharchive/pull-request
2024-01-15T18:51:34
2025-04-01T06:46:20.937013
{ "authors": [ "strickvl" ], "repo": "zenml-io/zenml", "url": "https://github.com/zenml-io/zenml/pull/2290", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
278824139
How to handle images from react-quill (redux) to server? (node express)
Dear People, please explain how I can handle images from the React-Quill textarea with REST. I am a beginner, so if someone can explain and write an example I will be very grateful.
In general, React-Quill is not responsible for that sort of functionality; we recommend using the underlying Quill plugin system instead.
There are a few plugins for Quill to handle images: https://www.npmjs.com/search?q=quill-image&page=1&ranking=optimal
Also check the Quill issues board: https://github.com/quilljs/quill
For help integrating Quill plugins with React-Quill, see the README:
https://github.com/zenoamaro/react-quill#custom-formats
https://github.com/zenoamaro/react-quill#api
gharchive/issue
2017-12-03T21:52:48
2025-04-01T06:46:20.939898
{ "authors": [ "alexkrolick", "mirik999" ], "repo": "zenoamaro/react-quill", "url": "https://github.com/zenoamaro/react-quill/issues/300", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1710344169
Implement auth and basic create-read operations
The shader body validation is missing (let's discuss that in a separate issue), but the basic DB structure with parent-child shader relations and stuff is here. Also contains some N+1 problems with children shader authors and stuff; I'll open an issue for a batching implementation. But we can already discuss the schema and maybe start figuring out the client side that uses it.
P.S. Sorry for the commit history, rebase might have been a mistake. Let's squash that.
Noice, but revert the latest 2 commits, you merged index.html incorrectly
@etareduction also made a discussions topic for more convenience https://github.com/zeokku/glsl.app/discussions/4
gharchive/pull-request
2023-05-15T15:33:43
2025-04-01T06:46:20.977343
{ "authors": [ "Lutymane", "etareduction" ], "repo": "zeokku/glsl.app", "url": "https://github.com/zeokku/glsl.app/pull/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
618971729
better support for posix api read write in socketpair tests
Is your enhancement proposal related to a problem? Please describe.
Use READ / WRITE macros in socketpair tests, to abstract read / recv and write / send as in #25356. socketpair should work with unistd read / write when #if !defined(__ZEPHYR__) || defined(CONFIG_POSIX_API) is true.
Describe the solution you'd like
I'll throw together a patch after #25271
Describe alternatives you've considered
Additional context
IIRC, this should Just Work already - will need to double check.
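A sketch of the macro abstraction described above (hypothetical macro names and placement; the real Zephyr test code may differ):

```c
#include <assert.h>
#include <string.h>
#include <unistd.h>

/* Plain POSIX read()/write() work off-Zephyr or when CONFIG_POSIX_API is
 * enabled; otherwise fall back to the socket calls send()/recv(). */
#if !defined(__ZEPHYR__) || defined(CONFIG_POSIX_API)
#define WRITE(fd, buf, len) write(fd, buf, len)
#define READ(fd, buf, len)  read(fd, buf, len)
#else
#define WRITE(fd, buf, len) send(fd, buf, len, 0)
#define READ(fd, buf, len)  recv(fd, buf, len, 0)
#endif
```

Test code written against READ/WRITE then compiles unchanged in both configurations.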
gharchive/issue
2020-05-15T13:33:59
2025-04-01T06:46:21.004964
{ "authors": [ "cfriedt" ], "repo": "zephyrproject-rtos/zephyr", "url": "https://github.com/zephyrproject-rtos/zephyr/issues/25362", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1140569967
k_cycle_get_32 returns 0 on start-up on native_posix
Describe the bug
k_cycle_get_32 always returns 0 on start-up.
Please also mention any information which could help others to understand the problem you're facing:
What target platform are you using? native_posix
What have you tried to diagnose or workaround this issue? Ran in gdb and observed hw_models_top.c simu_time==0
To Reproduce
Steps to reproduce the behavior:
Update test_assert_tests in tests\ztest\base\src\main.c with
uint32_t temp = k_cycle_get_32();
zassert_true(temp > 0, "k_cycle_get_32 returned %d", temp);
./scripts/twister -v -c -T tests/ztest/base
Observe failures
Expected behavior
I'd expect a different non-zero value returned each time on start-up
Impact
It prevents a random shuffling of test cases being executed. This is needed to identify any dependencies between test cases.
Logs and console output
Assertion failed at WEST_TOPDIR/zephyr/tests/ztest/base/src/main.c:25: framework_tests_test_assert_tests: (temp > 0 is false)
k_cycle_get_32 returned 0
Environment (please complete the following information):
OS: Linux
Toolchain: Zephyr SDK
Commit SHA or Version used: https://github.com/zephyrproject-rtos/zephyr/pull/42330
Additional context
none
Added the medium priority label as this affects tests.
@aescolar - would it make sense to add a kconfig option that either provides deterministic or non-deterministic results? This way we can satisfy both needs.
The idea is that you can have pseudo-randomness, but it must be well controlled (through a random seed, which you would print out in the test log, so when things fail you can rerun just that test exactly as when it failed). That is why you can enable pseudo-random behavior but you must provide the seed.

In any case, in this particular issue you seem to be after random boot timer counts. But many real embedded platforms will provide you very/fully deterministic boot times. So that is not a source of randomness you should use for any purpose anyhow. (And even if we were to add some more "randomness" to the native_posix target, we would not sprinkle that randomness around in the code.)

Note the aim of native_posix: https://docs.zephyrproject.org/2.6.0/boards/posix/native_posix/doc/index.html#rationale-for-this-port

Closing this due to inactivity; feel free to reopen @asemjonovs if you have additional questions.
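The controlled-seed approach described above can be sketched in plain C. This is an illustrative sketch, not part of the ztest API; the function name and log format are assumptions. The point is only that the seed is logged, so a failing order can be replayed exactly:

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative sketch of seeded, reproducible test shuffling (not ztest
 * API): print the seed to the log so a failing run can be replayed by
 * passing the exact same seed back in. */
static void shuffle_test_order(int *order, int n, unsigned int seed)
{
	printf("test shuffle seed: %u\n", seed); /* logged for reruns */
	srand(seed);
	/* Fisher-Yates shuffle driven by the seeded PRNG */
	for (int i = n - 1; i > 0; i--) {
		int j = rand() % (i + 1);
		int tmp = order[i];

		order[i] = order[j];
		order[j] = tmp;
	}
}
```

Given the same seed, the permutation is identical on a given libc, which is what turns a 1-in-1000 failure into something that can be re-triggered on demand.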
gharchive/issue
2022-02-16T20:46:12
2025-04-01T06:46:21.014206
{ "authors": [ "aescolar", "asemjonovs", "carlescufi", "yperess" ], "repo": "zephyrproject-rtos/zephyr", "url": "https://github.com/zephyrproject-rtos/zephyr/issues/42877", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1355212076
acrn_ehl_crb: testcases tests/kernel/smp failed to run on v2.7-branch Describe the bug The test case tests/kernel/smp failed on v2.7-branch on the ACRN board. I tested the case on the main branch, and it works. To Reproduce twister -p acrn_ehl_crb --device-testing --device-serial-pty=$HOME/acrn-env/acrn-test-pty.exp --west-flash=$HOME/acrn-env/acrn_efi.sh -v -T tests/kernel/smp/ -v Logs and console output ACRN:\> DEBUG - DEVICE: ACRN:\>vm_console 0 DEBUG - DEVICE: DEBUG - DEVICE: ----- Entering VM 0 Shell ----- DEBUG - DEVICE: *** Booting Zephyr OS build zephyr-v2.7.3 *** DEBUG - DEVICE: Running test suite smp DEBUG - DEVICE: =================================================================== DEBUG - DEVICE: START - test_smp_coop_threads DEBUG - DEVICE: PASS - test_smp_coop_threads in 0.563 seconds DEBUG - DEVICE: =================================================================== DEBUG - DEVICE: START - test_cpu_id_threads DEBUG - DEVICE: PASS - test_cpu_id_threads in 1.1 seconds DEBUG - DEVICE: =================================================================== DEBUG - DEVICE: START - test_coop_resched_threads DEBUG - DEVICE: PASS - test_coop_resched_threads in 0.301 seconds DEBUG - DEVICE: =================================================================== DEBUG - DEVICE: START - test_preempt_resched_threads DEBUG - DEVICE: PASS - test_preempt_resched_threads in 0.512 seconds DEBUG - DEVICE: =================================================================== DEBUG - DEVICE: START - test_yield_threads DEBUG - DEVICE: PASS - test_yield_threads in 0.301 seconds DEBUG - DEVICE: =================================================================== DEBUG - DEVICE: START - test_sleep_threads DEBUG - DEVICE: PASS - test_sleep_threads in 1.1 seconds DEBUG - DEVICE: =================================================================== DEBUG - DEVICE: START - test_wakeup_threads DEBUG - DEVICE: PASS - test_wakeup_threads in 0.201 seconds DEBUG - DEVICE: 
=================================================================== DEBUG - DEVICE: START - test_smp_ipi DEBUG - DEVICE: cpu num=2 PASS - test_smp_ipi in 0.301 seconds DEBUG - DEVICE: =================================================================== DEBUG - DEVICE: START - test_get_cpu DEBUG - DEVICE: PASS - test_get_cpu in 0.51 seconds DEBUG - DEVICE: =================================================================== DEBUG - DEVICE: START - test_fatal_on_smp DEBUG - DEVICE: E: RAX: 0x0000000000000003 RBX: 0x0000000000135168 RCX: 0x00000000000f4240 RDX: 0x0000000d00000000 DEBUG - DEVICE: E: RSI: 0x0000000000000200 RDI: 0x00000000000007d0 RBP: 0x00000000001203c0 RSP: 0x00000000001203c0 DEBUG - DEVICE: E: R8: 0x0000000000000001 R9: 0x00000000001388d8 R10: 0x0000000000000000 R11: 0x0000000000000000 DEBUG - DEVICE: E: R12: 0x000000000010f360 R13: 0x0000000000000200 R14: 0x000000000010f360 R15: 0x0000000000000000 DEBUG - DEVICE: E: RSP: 0x00000000001203c0 RFLAGS: 0x0000000000000206 CS: 0x0018 CR3: 0x0000000000140000 DEBUG - DEVICE: E: RIP: 0x00000000001009bb DEBUG - DEVICE: E: >>> ZEPHYR FATAL ERROR 3: Kernel oops on CPU 1 DEBUG - DEVICE: E: Current thread: 0x10e940 (test_fatal_on_smp) DEBUG - DEVICE: PASS - test_fatal_on_smp in 0.5 seconds DEBUG - DEVICE: =================================================================== DEBUG - DEVICE: START - test_workq_on_smp DEBUG - DEVICE: DEBUG - DEVICE: Assertion failed at WEST_TOPDIR/zephyr/tests/kernel/smp/src/main.c:712: test_workq_on_smp: k_work_busy_get(&work) not equal to 0 DEBUG - DEVICE: DEBUG - DEVICE: FAIL - test_workq_on_smp in 0.51 seconds DEBUG - DEVICE: =================================================================== DEBUG - DEVICE: START - test_smp_release_global_lock DEBUG - DEVICE: E: RAX: 0x0000000000000003 RBX: 0x0000000005a995c0 RCX: 0x0000000000000000 RDX: 0x0000000000000000 DEBUG - DEVICE: E: RSI: 0x0000000000000000 RDI: 0x0000000000000000 RBP: 0x000000000011d7f0 RSP: 0x000000000011d7e8 DEBUG - 
DEVICE: E: R8: 0x0000000000100149 R9: 0x000000000010a862 R10: 0x0000000000000000 R11: 0x0000000000000000 DEBUG - DEVICE: E: R12: 0x000000008b93a39c R13: 0x0000000000000200 R14: 0x000000000010e940 R15: 0x0000000000000000 DEBUG - DEVICE: E: RSP: 0x000000000011d7e8 RFLAGS: 0x0000000000000202 CS: 0x0018 CR3: 0x0000000000140000 DEBUG - DEVICE: E: RIP: 0x0000000000100152 DEBUG - DEVICE: E: >>> ZEPHYR FATAL ERROR 3: Kernel oops on CPU 1 DEBUG - DEVICE: E: Current thread: 0x10d760 (unknown) DEBUG - DEVICE: DEBUG - DEVICE: Assertion failed at WEST_TOPDIR/zephyr/tests/kernel/smp/src/main.c:649: k_sys_fatal_error_handler: (main_thread_id != child_thread_id is false) DEBUG - DEVICE: fatal on the same core DEBUG - DEVICE: FAIL - test_smp_release_global_lock in 0.23 seconds DEBUG - DEVICE: =================================================================== DEBUG - DEVICE: START - test_inc_concurrency DEBUG - DEVICE: type 0: cnt 60000, spend 8 ms DEBUG - DEVICE: type 1: cnt 60000, spend 73 ms DEBUG - DEVICE: type 2: cnt 60000, spend 123 ms DEBUG - DEVICE: PASS - test_inc_concurrency in 0.203 seconds DEBUG - DEVICE: =================================================================== DEBUG - DEVICE: Test suite smp failed. DEBUG - DEVICE: =================================================================== DEBUG - DEVICE: PROJECT EXECUTION FAILED Environment (please complete the following information): OS: Linux Toolchain: Zephyr SDK 14.1 Commit: b9fc341 cc @cfriedt This issue is not stable to reproduce, it's difficult to bisect. Still working on looking for what's commit caused this bug. @Zhaoningx any update? Tests with a randomized value and tests with SMP enabled can sometimes fail due to race conditions and timeouts. Sometimes we need to increase timeouts as a result. In CI, we explicitly retry tests with --retry-failed $N for some N (I usually choose 3, and most times the test pass, but sometimes they can fail once). 
I would be much more concerned if tests fail in a majority of 10 trials. @Zhaoningx - can you retry with --retry-failed 10?

Thank you @cfriedt. Sure: running this test case with "--retry-failed 10" it definitely passes, but with "--retry-failed 3" it doesn't always pass. So if "--retry-failed 10" is acceptable for daily testing, it could be a solution.

CI tries 3 times, for reference. It could be timing related as well, so perhaps increasing some timeout would make the test more reliable https://github.com/zephyrproject-rtos/zephyr/blob/fad899d2ad5e3c5b857ee0b5a331185addcc8deb/.github/workflows/twister.yaml#L136

acrn_ehl_crb uses the apic_tsc timer. For acrn, I suspect that CONFIG_SYS_CLOCK_TICKS_PER_SEC=10000 is too high; when I set it to 1000, 800, or 500, the failed test cases passed.

> acrn_ehl_crb uses the apic_tsc timer. For acrn, I suspect that CONFIG_SYS_CLOCK_TICKS_PER_SEC=10000 is too high; when I set it to 1000, 800, or 500, the failed test cases passed.

Nice catch @Zhaoningx - do you know if this bug is present or has already been fixed in main? In either case, are you able to make a PR?

This bug is not present in main; I am not sure whether it was fixed by some patch directly or as a side effect of other patches. After debugging, I found the issue happens because at certain points in time the hypervisor has a long delay. (At about 106 ms and 632 ms of uptime, we get many cycles late when we call k_cycle_get_32/64().) Actually, we found this via the tests/kernel/timer/timer_behavior test; after adding some debug info we see that the clock cycle we get has a large drift at these two times. These points affect some timing-sensitive test cases (such as tests/timer/timer_behavior, tests/kernel/smp, etc.) and even some scheduling tests that measure time. We are still not totally clear on why the hypervisor has such a delay at these points. It is most likely not an issue in Zephyr, because the clock cycle is read from rdtsc, which is provided by the hypervisor.
Although the failed tests are test_workq_on_smp() and test_smp_release_global_lock(), the failure only has to do with the timing of the tests as executed. A suggested workaround is to add a 700 ms boot delay to avoid these points. We can add CONFIG_BOOT_DELAY=700 for it. After applying the 700 ms delay for acrn_ehl_crb, the SMP test passed in 20 runs.

Hi, @mengxianglinx, you can verify it, thanks.

Yes @enjiamai, I ran twister with CONFIG_BOOT_DELAY=700 15 times; the failed case passed.
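For reference, the workaround settled on above fits in a one-line config fragment. Placing it in an extra conf file or board overlay is an assumption on my part; the option and value are quoted from the thread:

```
# Hypothetical extra configuration for tests/kernel/smp on acrn_ehl_crb,
# applying the boot-delay workaround discussed above.
CONFIG_BOOT_DELAY=700
```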
gharchive/issue
2022-08-30T05:59:54
2025-04-01T06:46:21.025973
{ "authors": [ "Zhaoningx", "cfriedt", "enjiamai", "fabiobaltieri", "mengxianglinx", "mmahadevan108" ], "repo": "zephyrproject-rtos/zephyr", "url": "https://github.com/zephyrproject-rtos/zephyr/issues/49656", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1887587202
64-bit number formatting broken on picolibc

Describe the bug
When trying to print or format a string that contains 64-bit signed numbers, picolibc does not produce correct output, but instead truncates values to 32 bits before conversion.

To Reproduce
Use the following code:

    int main(void)
    {
        printf("Hello World! %s\n", CONFIG_BOARD);
        printf("%" PRId64 "\n", INT64_MAX);
        printf("%" PRIx64 "\n", INT64_MAX);
        printf("%" PRId32 "\n", INT32_MAX);
        printf("%" PRIx32 "\n", INT32_MAX);
        return 0;
    }

and build it with picolibc enabled on any of the 32-bit platforms. I tried qemu_cortex_m3. Output is

    Hello World! qemu_cortex_m3
    -1
    ffffffff
    2147483647
    7fffffff
    QEMU: Terminated

Expected behavior
Expected output would be

    Hello World! qemu_cortex_m3
    9223372036854775807
    7fffffffffffffff
    2147483647
    7fffffff

When I compile the same code using newlib, it produces the correct output.

Impact
Breaks formatting of big numbers, for example LwM2M plain text encoding.

Root causes of issue
https://github.com/zephyrproject-rtos/zephyr/issues/62330

This is known; it is due to using the SDK-bundled picolibc. If you build from source (the picolibc module) then it is not a problem. @keith-packard can probably link the issue I saw it in earlier.

You can also enable floating point output by selecting CONFIG_PICOLIBC_IO_FLOAT -- that includes 64-bit integer output. We might want to change the SDK picolibc configuration, although adding long long support increases the size of the library by a small amount.

@keith-packard Thank you for the information. I tried, and yes it definitely looks like CONFIG_PICOLIBC_IO_FLOAT would fix our number formatting tests. Now, I assume that this problem is known and will not be fixed. But is there an open bug report? How is this documented? I assume that this configuration option will impact flash size, so developers might consider leaving it off, knowing that some clear-text formatting corner cases are not working.

> @keith-packard Thank you for the information. I tried, and yes it definitely looks like CONFIG_PICOLIBC_IO_FLOAT would fix our number formatting tests. Now, I assume that this problem is known and will not be fixed. But is there an open bug report? How is this documented? I assume that this configuration option will impact flash size, so developers might consider leaving it off, knowing that some clear-text formatting corner cases are not working.

It's all about flash space trade-offs for the default configuration. Are long long formatting requests common enough that the default (SDK-provided) library should enable them? Or are the space savings important enough to leave them out? Including long long support in printf costs 840 bytes on cortex m3 targets, most of which is pulling in the 64-bit division code from libgcc (the change in printf is only 32 bytes). If you need long long printf support for your project, you can switch from using the SDK-bundled picolibc version to the picolibc module. That provides finer-grained selection over printf features, including the ability to enable long long support without also enabling floating point support. Alternatively, if you feel that the SDK should include long long printf support by default, that's also possible, but it would need some discussion to build rough consensus for the change.

I did a bit more poking at this to try and provide a bit more data here. As I mentioned above, the reason for eliding long long support is mostly about not using the 64-bit software divide code; the printf code itself grows by a tiny amount (32 bytes). The alternative (and one which picolibc used in the past) was to avoid the 64-bit divide by doing division and modulus by 10 with fancy code. I just re-implemented that with a modest chunk of code which does the division by 10 using a reciprocal multiply; the upper 64 bits of a multiply by 0xcccccccccccccccdULL yield an approximation to /10, which can be adjusted and then the remainder computed.
That increases the size of vfprintf by 116 bytes, but avoids pulling in __aeabi_uldivmod for a net savings of 632 bytes. That would be good for targets where the only 64-bit division operations were in the bowels of printf. However, a target which does a bunch of 64-bit math and uses __aeabi_uldivmod for its own needs now pays the 116-byte additional cost in vfprintf, without the savings from not linking __aeabi_uldivmod. All of this also presumes we can tell which platforms perform division in software, which is actually pretty hard to detect at build time.

I have encountered this problem with my downstream application. All long long prints are truncated to 32-bit even though CONFIG_PICOLIBC_IO_LONG_LONG is selected. If the SDK-bundled picolibc does not have long long support by default, then we should disable CONFIG_PICOLIBC_IO_LONG_LONG, so it does not confuse users that try to get long long prints working. Of course building picolibc from the module works fine, as does enabling CONFIG_PICOLIBC_IO_FLOAT as suggested above. But still, this is not enough. Possible approaches:

1. Enable long long support by default in the Zephyr SDK, since CONFIG_PICOLIBC_IO_LONG_LONG=y is selected by default as well (due to CONFIG_CBPRINTF_FULL_INTEGRAL=y).
2. Select CONFIG_PICOLIBC_IO_FLOAT automatically at the Kconfig level whenever CONFIG_PICOLIBC_USE_MODULE=n and CONFIG_PICOLIBC_IO_LONG_LONG=y.
3. Change if(CONFIG_PICOLIBC_IO_FLOAT) to if(CONFIG_PICOLIBC_IO_FLOAT OR (NOT CONFIG_PICOLIBC_USE_MODULE AND CONFIG_PICOLIBC_IO_LONG_LONG)) in lib/libc/picolibc/CMakeLists.txt.
4. Select PICOLIBC_IO_FLOAT instead of PICOLIBC_IO_LONG_LONG when !PICOLIBC_USE_MODULE.

Options 2-4 are not optimal, since they will pull in float/double printf/scanf by default even with the simplest Zephyr applications (like hello_world). But at least this results in expected (and selected-by-configuration) behavior, right? Or maybe we don't need to (try to) enable long long support with CBPRINTF_FULL_INTEGRAL, and in most cases the float variant won't be used?
Never mind the above comment; I just read the linked "picolibc 1.8.5", which solves this issue.
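The reciprocal-multiply trick described above can be sketched in C. This is an illustration of the technique, not picolibc's actual code; it leans on the GCC/Clang unsigned __int128 extension to express the high multiply compactly, which a real 32-bit target would instead compose from 32x32->64 products:

```c
#include <stdint.h>

/* Sketch of 64-bit division by 10 via reciprocal multiply (illustrative,
 * not picolibc's implementation). 0xcccccccccccccccd is ceil(2^67 / 10),
 * so the upper 64 bits of n * 0xcccccccccccccccd, shifted right by 3,
 * give n / 10 for any 64-bit n without pulling in __aeabi_uldivmod. */
static uint64_t udiv10(uint64_t n, uint64_t *rem)
{
	/* unsigned __int128 is a GCC/Clang extension used here only to
	 * express the 64x64 -> high-64 multiply compactly. */
	uint64_t q = (uint64_t)(((unsigned __int128)n *
				 0xcccccccccccccccdULL) >> 64) >> 3;

	*rem = n - q * 10u;
	return q;
}
```

Calling udiv10() repeatedly peels off the decimal digits of a 64-bit value, which is all a long long %d conversion in printf ultimately needs.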
gharchive/issue
2023-09-08T12:25:55
2025-04-01T06:46:21.039539
{ "authors": [ "SeppoTakalo", "keith-packard", "mniestroj", "nordicjm" ], "repo": "zephyrproject-rtos/zephyr", "url": "https://github.com/zephyrproject-rtos/zephyr/issues/62444", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2075837934
power-domain-gpio must have configurable init priority

The current implementation of power-domain-gpio does not have a configurable init priority; it is hardcoded to 75. This makes it incompatible with any device drivers that have a lower priority. The init priority of power-domain-gpio should be configurable via Kconfig.

@GrantRolls Could you submit a PR for fixing this?

> @GrantRolls Could you submit a PR for fixing this?

Yes, I can try to get around to this sometime in the next week.

I have created a PR adding an option to change the init priority in all available domains.

Thanks for that. Closing this regardless, because I didn't realise it had already been resolved to some degree with the common config option. I'm running an older version.
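The requested fix amounts to a Kconfig knob replacing the hardcoded value. A hypothetical sketch of such an option follows; the symbol name and help text are illustrative, and only the default of 75 comes from the report:

```
config POWER_DOMAIN_GPIO_INIT_PRIORITY
	int "GPIO power domain init priority"
	default 75
	help
	  Device initialization priority for the GPIO power domain
	  driver. It must run after the GPIO controller it consumes
	  and before any device placed inside the domain.
```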
gharchive/issue
2024-01-11T06:13:27
2025-04-01T06:46:21.042954
{ "authors": [ "GrantRolls", "ceolin", "henrikbrixandersen" ], "repo": "zephyrproject-rtos/zephyr", "url": "https://github.com/zephyrproject-rtos/zephyr/issues/67467", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
332820030
Bluetooth: tester: LPN Poll issue

There are 4 test cases that fail because PTS is not receiving LPN Poll requests. Affected test cases: MESH/NODE/FRND/LPN/BI-02-C, MESH/NODE/FRND/LPN/BV-02-C, MESH/NODE/FRND/LPN/BV-03-C, MESH/NODE/FRND/LPN/BV-06-C. Tested on nrf52840_pca10056.

One needs to try running those tests manually using the mesh_shell application to see if the failure is caused by an incorrect tester application config. MESH/NODE/FRND/LPN/BV-02-C

@MariuszSkamra will this be fixed for 1.13? @jhedberg fyi

@MariuszSkamra is this still showing up with recent runs of PTS?

@jhedberg should be addressed with Vendor Extension Support. We'll likely resolve this issue after Vendor Extensions are done; moving to past 1.14.

Adding @joerchan to keep him in the loop when it comes to Mesh extensions.

@jhedberg there are 3 commands implemented in the legacy LL already, you could start using those in the Host.

@jhedberg The mesh_advertise and mesh_advertise_cancel commands are implemented in the legacy LL already, you could start using those in the Host.

@carlescufi yes, I could take a look at whether we can replace the bt_le_adv_start/stop usage with that (at least for a combined build where we know the controller supports the vendor extensions). However, I'd feel uneasy making this never-before-used controller functionality the primary method of mesh transmission just a few weeks before LTS, so I think I'd prefer to experiment with this only once we have the release out.

@MariuszSkamra is this still an issue?

No, this is not valid.
gharchive/issue
2018-06-15T15:30:54
2025-04-01T06:46:21.047721
{ "authors": [ "MariuszSkamra", "carlescufi", "galak", "jhedberg", "laperie", "nashif" ], "repo": "zephyrproject-rtos/zephyr", "url": "https://github.com/zephyrproject-rtos/zephyr/issues/8419", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }