2631449421 | Prepare for a new rstar release?
We have some additions and changes, and it's been about 9 months since our last release. According to cargo semver-checks we're safe to push a point release (0.12.1).
What do people think?
For fixes and additions, I think we should release as often as we have the capacity to.
Done in https://github.com/georust/rstar/pull/180
| gharchive/issue | 2024-11-03T20:24:06 | 2025-04-01T06:44:18.161434 | {
"authors": [
"adamreichold",
"urschrei"
],
"repo": "georust/rstar",
"url": "https://github.com/georust/rstar/issues/179",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1131260909 | WFS rules are enforced only if removed and recreated manually
After noticing that I was able to edit a layer from QGIS (WFS-T) for which I didn't have edit permissions, I also noticed that I could perform WFS GetFeature requests on layers for which I only had Read permissions.
I've made several tests both on Nexus and locally, taking care to invalidate the Geofence cache, reload Geoserver catalogue and ensuring (both from Postman and from inside a debug session in GeoNode) that I was executing the requests as the test user.
After deleting and recreating the WFS Geofence rules manually they were correctly applied.
In the end the issue was a * rule for REGISTERED_ROLES on everything. The rule was both on my server and Nexus. The rule is not there on master demo.
I don't know where it came from, but I would create a reporting script to check and fix it in case other installs have this rule, which is clearly a huge security hole.
| gharchive/issue | 2022-02-10T23:46:23 | 2025-04-01T06:44:18.196232 | {
"authors": [
"giohappy"
],
"repo": "geosolutions-it/nexus-geonode",
"url": "https://github.com/geosolutions-it/nexus-geonode/issues/577",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2560610117 | Need an easier way to delete wallets
It's difficult to remove a wallet from the list without going 2 levels into the settings, and in one instance I have a wallet that was not connected properly but can't be deleted from the list.
Consider adding a swipe-left function to delete (with a confirmation).
Thanks for reporting, that should be fixed with one of the next releases! :raised_hands:
Fixed with #143
| gharchive/issue | 2024-10-02T03:38:58 | 2025-04-01T06:44:18.225266 | {
"authors": [
"dmnyc",
"reneaaron"
],
"repo": "getAlby/go",
"url": "https://github.com/getAlby/go/issues/148",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1382156974 | feat: check ln address in location
Describe the changes you have made in this PR
Ln Address from location in GitHub
Link this PR to an issue
Closes #1056
Type of change (Remove other not matching type)
feat: New feature (non-breaking change which adds functionality)
How has this been tested?
Manually
Checklist
[x] My code follows the style guidelines of this project and I performed a self-review of my own code
[x] New and existing tests pass locally with my changes
[x] I checked if I need to make corresponding changes to the documentation (and made those changes if needed)
Doesn't seem to work:
Tested with my profile. Did it work for you?
| gharchive/pull-request | 2022-09-22T09:47:11 | 2025-04-01T06:44:18.228648 | {
"authors": [
"escapedcat",
"im-adithya"
],
"repo": "getAlby/lightning-browser-extension",
"url": "https://github.com/getAlby/lightning-browser-extension/pull/1503",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
209588838 | java.properties - Comment for default value does not match value set by install
Small change to make the comment relating to clear state level match the value set by the installer.
@jgwilson42 Thanks for the correction.
| gharchive/pull-request | 2017-02-22T21:36:35 | 2025-04-01T06:44:18.235695 | {
"authors": [
"BugDiver",
"jgwilson42"
],
"repo": "getgauge/gauge-java",
"url": "https://github.com/getgauge/gauge-java/pull/81",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
199327011 | Pages are extremely slow to load
I just started with a brand new installation of Grav (w/ admin tools). I'm working on a dev box with EasyPHP on PHP 7.0.4 locally. Functionally, the service is fine so far.
However, I'm getting extremely slow load times from Grav, both on EasyPHP and WampServer. Admin panel pages take 7-8 seconds to load, and content pages take 3-4. Other pages of mine I work with locally take under a second.
Based on the docs, Grav should be built to be highly performant. Are these kinds of load times expected? If not, how can I diagnose the issue?
I found the debugging view in Grav, and was able to get a view of where those 3 seconds are going. A lot of those seem to be taking longer than they should -- configuration, for example, is taking over 800ms by itself (200ms when it hits the cache, but the overall time is still over 3sec).
Edit: Added debugging image and tested with WampServer also.
That's definitely not normal. With PHP 7.0 you should be under 50ms (per Grav debugger) no matter your hardware (pretty much). I commonly get 10ms on my 2008 mac pro.
The first thing to check is extra PHP extensions. Things like xdebug can really slow PHP down a lot. The other thing is a slow file system. This could be related to accessing your site from a shared folder, or worse yet, a network folder.
What is your computer setup like? Even in my VMWare virtual machine running Windows 10, I get only slightly worse performance than my native Mac.
BTW to compare your debugger timeline screenshot, this is a site I'm working on now:
Yeah, that's what I figured. In the debugger docs, it shows the load time as 9ms. (I wish!)
The first thing to check is extra PHP extensions. Things like xdebug can really slow PHP down a lot.
Xdebug hasn't been enabled, I don't think. My php.ini file currently has:
zend_extension=""
xdebug.default_enable=0
xdebug.remote_enable=0
xdebug.remote_autostart = false
xdebug.auto_trace=0
xdebug.profiler_enable=0
And when I run xdebug_* commands in PHP, they fail with a "command not found" error. Not sure which other extensions I should remove, but I will poke around at removing some of them and see what happens!
The other thing is a slow file system. This could be related to accessing your site from a shared folder, or worse yet, a network folder.
I'm not sure how slow would be considered "slow". I think the last benchmark I ran on my HDD had it at around 150 MB/s. Additionally, my test with Wamp (which was somehow ~1sec slower) was on an SSD.
What is your computer setup like? Even in my VMWare virtual machine running windows 10, i get only slightly worse performance than my native mac.
Windows 10 x64, i7 3.6GHz processor, 32 GB memory, GTX 960 SLI for graphics, and as mentioned I've been trying on both an SSD and HDD.
Well, you certainly don't have a slow machine. Could you please try running with the built-in PHP server?
php -S localhost:8000 router.php
BTW, this only works with latest Grav release.
I had to pull up the router.php file from /system to the project root to get it to run. Once I did so, and accessed localhost:8000, I got this:
I:\Projects\Websites\ekumlin-web>php -S localhost:8000 router.php
Failed loading D:\EasyPHP\Devserver-16.1\eds-binaries\php\php704vc14x86x161220120433\ext\
PHP 7.0.4 Development Server started at Fri Jan 6 22:18:17 2017
Listening on http://localhost:8000
Document root is I:\Projects\Websites\ekumlin-web
Press Ctrl-C to quit.
[Fri Jan 6 22:18:22 2017] ::1:60267 [200]: /
[Fri Jan 6 22:18:22 2017] ::1:60268 [200]: /user/themes/antimatter/css/pure-0.5.0/grids-min.css
[Fri Jan 6 22:18:22 2017] ::1:60269 [200]: /user/themes/antimatter/css-compiled/nucleus.css
[Fri Jan 6 22:18:22 2017] ::1:60270 [200]: /user/themes/antimatter/css-compiled/template.css
[Fri Jan 6 22:18:22 2017] ::1:60271 [200]: /user/themes/antimatter/css/font-awesome.min.css
[Fri Jan 6 22:18:22 2017] ::1:60272 [200]: /vendor/maximebf/debugbar/src/DebugBar/Resources/debugbar.css
[Fri Jan 6 22:18:22 2017] ::1:60273 [200]: /vendor/maximebf/debugbar/src/DebugBar/Resources/widgets.css
[Fri Jan 6 22:18:22 2017] ::1:60274 [200]: /vendor/maximebf/debugbar/src/DebugBar/Resources/openhandler.css
[Fri Jan 6 22:18:22 2017] ::1:60275 [200]: /system/assets/debugger.css
[Fri Jan 6 22:18:22 2017] ::1:60276 [200]: /user/plugins/markdown-notices/assets/notices.css
[Fri Jan 6 22:18:22 2017] ::1:60277 [200]: /user/plugins/login/css/login.css
[Fri Jan 6 22:18:22 2017] ::1:60278 [200]: /user/themes/antimatter/css/slidebars.min.css
[Fri Jan 6 22:18:22 2017] ::1:60279 [200]: /system/assets/jquery/jquery-2.x.min.js
[Fri Jan 6 22:18:22 2017] ::1:60280 [200]: /user/themes/antimatter/js/modernizr.custom.71422.js
[Fri Jan 6 22:18:22 2017] ::1:60281 [200]: /vendor/maximebf/debugbar/src/DebugBar/Resources/debugbar.js
[Fri Jan 6 22:18:22 2017] ::1:60282 [200]: /vendor/maximebf/debugbar/src/DebugBar/Resources/widgets.js
[Fri Jan 6 22:18:22 2017] ::1:60283 [200]: /vendor/maximebf/debugbar/src/DebugBar/Resources/openhandler.js
[Fri Jan 6 22:18:22 2017] ::1:60284 [200]: /user/themes/antimatter/js/antimatter.js
[Fri Jan 6 22:18:22 2017] ::1:60285 [200]: /user/themes/antimatter/js/slidebars.min.js
[Fri Jan 6 22:18:22 2017] ::1:60288 [200]: /user/themes/antimatter/css-compiled/nucleus.css.map
[Fri Jan 6 22:18:22 2017] ::1:60289 [200]: /user/themes/antimatter/css-compiled/template.css.map
[Fri Jan 6 22:18:22 2017] ::1:60290 [200]: /user/themes/antimatter/fonts/fontawesome-webfont.woff2?v=4.6.3
[Fri Jan 6 22:18:22 2017] ::1:60291 [200]: /system/assets/grav.png
Access time was still around 3.7sec.
Some good news! Not great, but good.
I set realpath_cache_size = 16M in php.ini, which reduced the load time to around 1.2sec instead of 3sec+.
Unfortunately I'm thinking, at this point, that there's little related to Grav in the slowness I'm experiencing. Feel free to close the issue if you feel it's appropriate. (Though I'm of course open to more suggestions!)
I'm not sure what other alternatives I have in terms of PHP+Apache server. I left Wamp because it didn't support PHP 7, and EasyPHP did. I also tried using MAMP on my Macbook, but ran into all kinds of other problems with that (like internal server errors with no log messages associated); not sure what setup you have on your MBP.
Having the code on D: doesn't help, oddly enough, though I understand that Windows' file operations can be sluggish. Both drives generally vary from 800ms~1.2s if I don't load the page for a while (i.e. wait a minute or two, then refresh).
What is interesting is that if I refresh the page over and over in quick succession, the time reduces down to ~180ms. It seems kind of like the cache "warms up" and then after some time, shuts down and has to start up again.
Out of curiosity, have you tested PHP 5.6 or 7.1?
Just wondering if it's a bug in PHP 7 on Windows.
Oh can you please zip up your current site (whole grav folder) and send me a link to download and test it locally. You never know might be something I can test myself.
Can email me at rhuk@rhuk.net
Yup, done.
Yah, not the Grav config. 180ms for first uncached load, < 20ms every refresh after that.
Huh. This is definitely odd. The fact that it is slow across both my Windows and Mac machines, but not yours.
I also just tried on my prod server, and I get 60~70ms load times. So that's a definite improvement, and probably one that I can live with.
From experience, all versions of PHP 7.x work equally well with Grav; I've run through all the released ones. Windows Defender is not the issue, unless you get a flashing message saying "Defender found a bad file". The only way disk read/write speed would be affected by Windows would be if operations had to run across disks/partitions or if a service or process had to be accessed during runtime. Other than that, file transfers, reading, and writing run pretty much like in MS-DOS, leveraged through explorer.exe.
Is your browser (Chrome from your screenshots) up to date? And does it have any peculiar (ie, experimental) settings? In my experience sluggish performance in the browser, seemingly disappearing with fast reloads, is an issue with the browser cache or bad antivirus interfering with it (read: Avast). This is most obvious if the browser uses a lot of CPU-power, memory, and has many running processes (many tabs, extensions loaded on each).
Another problem, common in "easy setup" development environments like Wamp, Mamp, EasyPHP, XAMPP etc. is that their default configurations are miles away from optimal or simple. This usually applies to both the server (Apache, Nginx) and the scripts (PHP). This is why I prefer Caddy: Single binary (~14MB), no included scripts or annoying setups. Thus PHP runs as simple and "pure" as possible, and the environment itself has no hidden hindrances. Starting it is just typing caddy into a console window, and pretty much every relevant setting resides in a single php.ini (well, one for each active version of PHP) that gets run with a basic call within the caddyfile: fastcgi / 127.0.0.1:9000 php (ie. a pure PHP-server controlled by Caddy).
Is your browser (Chrome from your screenshots) up to date?
Yup.
And does it have any peculiar (ie, experimental) settings?
No, it has a couple extensions, but nothing too taxing. Nothing that would affect back-end execution, anyway. I also tried in Internet Explorer (with all default settings, no extensions) and got the same result.
This is why I prefer Caddy: Single binary (~14MB), no included scripts or annoying setups.
Unfortunately, Caddy doesn't work for me. I just get errors when I try to access the page.
07/Jan/2017:15:01:09 -0800 [ERROR 502 /] read tcp 127.0.0.1:5360->127.0.0.1:9000: wsarecv: An existing connection was forcibly closed by the remote host.
Doesn't look like a common problem, though I may open an issue on the Caddy project and see if anyone has any suggestions about that.
Looks like a more common issue when run within Docker. Are you running Caddy locally on Windows or dockerized?
Locally. I'm not using any kind of virtualization.
It's an odd error for sure. Alongside the caddyfile and caddy.exe I have a folder named php with a clean copy of PHP 7.0.7, and I run this in the caddyfile to get Grav up:
# Grav
grav.dev:80 {
root C:\caddy\grav
log logs/access.log
errors logs/error.log
gzip
tls off
fastcgi / 127.0.0.1:9000 php
# Begin - Security
# deny all direct access for these folders
rewrite {
r /(.git|cache|bin|logs|backups|tests)/.*$
to /403
}
# deny running scripts inside core system folders
rewrite {
r /(system|vendor)/.*\.(txt|xml|md|html|yaml|php|pl|py|cgi|twig|sh|bat)$
to /403
}
# deny running scripts inside user folder
rewrite {
r /user/.*\.(txt|md|yaml|php|pl|py|cgi|twig|sh|bat)$
to /403
}
# deny access to specific files in the root folder
rewrite {
r /(LICENSE.txt|composer.lock|composer.json|nginx.conf|web.config|htaccess.txt|\.htaccess)
to /403
}
status 403 /403
## End - Security
# global rewrite should come last.
rewrite {
to {path} {path}/ /index.php?_url={uri}
}
}
If you have time I'm in the Gitter-chat.
For future reference, the above caddyfile needed the following addition:
# Grav
grav.dev:80 {
...
tls off
startup php-cgi -b 127.0.0.1:9000 &
fastcgi / 127.0.0.1:9000 php
...
}
Ie. `startup php-cgi -b 127.0.0.1:9000 &` on the line before `fastcgi / 127.0.0.1:9000 php`. And of course, it assumes that `php` is available from the command-line (in Windows system paths/environment variables).
Caddy's a bit faster, but not much.
EasyPHP (Win): ~1.5s cold, 300~800ms warm
Caddy (Win): ~1.5s cold, 300~500ms warm
MAMP (Mac): ~400ms cold, 150~300ms warm
Production server (Linux): ~1.6s cold, ~70ms warm
On my massively slower laptop, with a cleared Grav-cache I clock admin in at 6.06s, and cached at 3.52s. Of course, I run about 8 sites of various size on the same instantiation of PHP within Caddy. If I were to discount Font Awesome, Montserrat Google Font, and run it over HTTP/2 this would probably be lowered to a third of those values. Drop the multiple configurations within Caddy, maybe even 0.5-1s less than that.
At least now you have a cleaner and easier-to-upgrade version of PHP running, with less overhead, though it might be worthwhile to diff the Caddy and Mamp PHP configurations to see what part of Mamp's php.ini leads to the lower times. On production I reckon the biggest improvements would be PHP 7 and HTTP/2.
FWIW I've found that APCu is actually slightly slower than file caching with PHP7. Not sure how that's possible, but something to consider.
Yeah, I think this is as good as it's going to get. I'm fairly sure I'm not using APC, but I'll double-check my config. I'm going to close this issue now, since it seems to be a machine-related issue and not a Grav issue.
For anyone who finds this in the future with a similar issue, my take-aways were:
The issue was with PHP execution, not with connection time (meaning none of the ::1 or hosts hacks worked for me)
Set realpath_cache_size = 16M (not 16k) in php.ini
Use latest PHP (7.1+ right now)
WampServer is very slow, EasyPHP is slightly faster, and Caddy is slightly faster than that (configuring Caddy is a tad tricky; see OleVik's notes above)
Instead of Windows, try Mac or Linux (at least 50% reduction in load times for me), if for no other reason than to see if Windows is holding you back
Drive type (SSD vs HDD) didn't matter much
Thanks for the help, guys!
Having just been through similar troubleshooting (though not as severe; see the linked thread), the https://blackfire.io/ profiler was helpful to figure it out. By now it works with Windows and PHP 7. Beats guessing. :)
I have a similar problem with Grav+Gantry on OpenServer + Windows 7 x64. Pages load very slowly in any browser.
The problem was gone when I switched Apache+PHP 7.1 to x86; before that they were x64.
Just a comment which I found out from some user: it looks like checking for non-existing files isn't cached in Windows. It may be one contributing factor in why it is so slow on Windows, but fast on every other system.
@mahagr yep, I raised that in https://github.com/getgrav/grav/issues/931#issuecomment-270714008 :)
Oh, there it was! I need to bookmark it. :1st_place_medal:
Great! Just on the subject of web servers, try https://laragon.org
realpath_cache_size
Set realpath_cache_size = 16M (not 16k) in php.ini
The same problem here, but I'm using Windows. I use cPanel / paid shared hosting with PHP 8.2. There's no such value; instead there's max_execution_time, which was set to "60" despite the default value being "30" as mentioned there. (Did the hoster change it?)
However, I changed this value to "16M". It became faster, but not always; multiple times it still takes a long time.
I changed the value to "16". It's always fast now.
Only Grav requires this setting. It's a fresh Softaculous installation.
| gharchive/issue | 2017-01-07T00:55:30 | 2025-04-01T06:44:18.276856 | {
"authors": [
"OleVik",
"Rarst",
"SpawnTree",
"ashraf21c",
"ek68794998",
"ekumlin",
"gooddha",
"mahagr",
"martic",
"rhukster"
],
"repo": "getgrav/grav",
"url": "https://github.com/getgrav/grav/issues/1239",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1703934228 | Admin dashboard broken after upgrade
I upgraded grav this morning and now the admin panel is broken. The web page still renders. Happy to provide any data to figure out the problem. My installation is running on Centos 7 with an Apache backend and a fairly modern php (8.x? can't check at the moment as I don't have ssh access).
The error page has a lot of information and I can post anything that would be useful. The "headline" message is:
Twig \ Error \ RuntimeError An exception has been thrown during the rendering of a template ("Undefined array key "time"").
My php is 8.0.28. I did the upgrade from #3718 and unfortunately, it is still broken. But unlike others from #3718, my page renders without error. Please let me know what kind of useful information I can supply.
I need more info. Can you turn on full errors and give me a screenshot of the page? Need to know what file this is happening in.
I turned on debugging and here is a screenshot.
I'm unable to replicate this. Are you fully up to date with all plugins + latest grav? from CLI do a bin/gpm update -f if possible.
If you can access the filesystem, could you try replacing the existing Admin::lastBackup() method starting on line 1439 in user/plugins/admin/classes/plugin/Admin.php with this:
public function lastBackup()
{
    $backup_file = $this->grav['locator']->findResource('log://backup.log');
    $file = JsonFile::instance($backup_file);
    $content = $file->content() ?? null;

    if (!file_exists($backup_file) || is_null($content) || !isset($content['time'])) {
        return [
            'days' => '∞',
            'chart_fill' => 100,
            'chart_empty' => 0
        ];
    }

    $backup = new \DateTime();
    $backup->setTimestamp($content['time']);
    $diff = $backup->diff(new \DateTime());
    $days = $diff->days;
    $chart_fill = $days > 30 ? 100 : round($days / 30 * 100);

    return [
        'days' => $days,
        'chart_fill' => $chart_fill,
        'chart_empty' => 100 - $chart_fill
    ];
}
I could replicate it if I created an empty user/logs/backup.log with no data in it. Removing the backup.log file or having a legit log file was fine.
I'm unable to replicate this. Are you fully up to date with all plugins + latest grav? from CLI do a bin/gpm update -f if possible.
It was fully up-to-date.
If you can access the filesystem, could you try replacing the existing Admin::lastBackup() method starting on line 1439 in user/plugins/admin/classes/plugin/Admin.php with this:
This did the trick.
I could replicate it if i created an empty user/logs/backup.log with no data in it. Removing the backup.log file or having a legit log file was fine.
After reverting the prior change, this also worked.
Many thanks! Upgrades have always worked without problem in the past so this was puzzling.
Admin 1.10.41.2 has been released with this fix.
| gharchive/issue | 2023-05-10T13:24:53 | 2025-04-01T06:44:18.285532 | {
"authors": [
"rhukster",
"zoof"
],
"repo": "getgrav/grav",
"url": "https://github.com/getgrav/grav/issues/3715",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
454974822 | Please add a quick way to open the blog
Usually after syncing, people visit the actual live site to confirm the sync succeeded, and every time this requires opening a browser and typing in the site address by hand.
I hope a feature can be added to quickly open the live site.
Thanks for such a great product.
Will be added in the next version.
Added; please download the latest version to enjoy it.
It seems that after the update, the local Markdown files are lost?
| gharchive/issue | 2019-06-12T01:49:21 | 2025-04-01T06:44:18.287588 | {
"authors": [
"EryouHao",
"UserNameZjw"
],
"repo": "getgridea/gridea",
"url": "https://github.com/getgridea/gridea/issues/147",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
477222977 | feat: Markdown styles are too ugly
1. Suggest providing a choice of multiple Markdown templates
2. The list view doesn't render Markdown; in particular, pasted code collapses into a single unformatted line in the list
You mean the Markdown styles in a particular theme, right?
If you need this, you can modify the theme yourself. Also, writing block-level code in lists is not recommended.
| gharchive/issue | 2019-08-06T07:47:48 | 2025-04-01T06:44:18.288859 | {
"authors": [
"EryouHao",
"esky-tech"
],
"repo": "getgridea/gridea",
"url": "https://github.com/getgridea/gridea/issues/239",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1293286119 | UI/UX Desigining
I want to contribute to this open-source, world-changing project, and I want to work on UI/UX design.
Hi @DevenRathod2, in #2 we are talking about a new UI/UX design
Yeah, I can help you design a stunning UI.
Also, @getify, can you create a Slack workspace for discussion about this project?
Hi, I can help too. Also have a uterus 😁
Hey! I am a frontend developer and I also have a uterus 😁. I would love to help because I had to delete my period tracking apps with these laws and I need something which protects my identity and data. (PS: I live in Texas and female reproductive rights have gone down the drain here.)
@aditiranka27 I live in Texas too, a big reason motivating this app!
@getify I agree! A huge motivation to contribute to something which would help all the people affected.
I think #2 is tracking the UI design. Do we need a separate thread (this one) just for UX discussions separate from UI, or can we combine our discussion into that other thread?
I think we can combine the two threads.
Great, everyone here, please move over to #2 and jump in! There's a figma project tracking some early design concepts, and plenty more to discuss and decide. Welcome!
| gharchive/issue | 2022-07-04T15:00:24 | 2025-04-01T06:44:18.297539 | {
"authors": [
"DevenRathod2",
"aditiranka27",
"artmarydotir",
"getify",
"gioboa"
],
"repo": "getify/youperiod.app",
"url": "https://github.com/getify/youperiod.app/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
231582909 | Update mssql driver
Update the mssql-jdbc driver to the latest version (6.1.7.jre8-preview).
Checklist
[x] Unit test all changes
[x] Update README.md if applicable
[x] Add [WIP] to the pull request title if it's work in progress
[x] Squash commits that aren't meaningful changes
[x] Run sbt scalariformFormat test:scalariformFormat to make sure that the source files are formatted
@getquill/maintainers
Does this fix the issue with the failing build?
https://github.com/getquill/quill/pull/787/files
Look at this one and please update contribution.md
@mentegy Let's keep those changes as a separate PR. We will merge it as soon as you reopen it and update the description and comment.
@mxl it doesn't, just updating the driver. Still trying to figure what is the build problem.
@mxl works for me
| gharchive/pull-request | 2017-05-26T10:08:49 | 2025-04-01T06:44:18.336832 | {
"authors": [
"juliano",
"mentegy",
"mxl"
],
"repo": "getquill/quill",
"url": "https://github.com/getquill/quill/pull/791",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
271990311 | Object [object Object] has no method 'getName'
Hello. When I call captureException from my Meteor node server I get this error, which causes my app to restart.
TypeError: Object [object Object] has no method 'getName'
at HTTPSTransport.HTTPTransport.send (/build/bundle/programs/server/npm/node_modules/raven/lib/transports.js:41:26)
at /build/bundle/programs/server/npm/node_modules/raven/lib/client.js:310:22
at Deflate.onEnd (zlib.js:167:5)
at Deflate.emit (events.js:117:20)
at _stream_readable.js:944:16
at process._tickDomainCallback (node.js:492:13)
I think its when I call with the following command:
Raven.captureException('getResult exception', {
  extra: {
    settings: {
      resultId: '1234',
      tileId: '4567',
      timeView: 'year',
      timeElement: '2017',
      benchmarkId: 1,
      questionAlias: '187',
      bgVars: [],
      tileType: 'tileBarChart',
      navigationId: null,
      propertyViewId: null,
      sessionId: '7890',
    },
    error: 'Timeout: Request failed to complete in 30000ms',
  },
});
It also could be an uncaught Exception though.
Does this make any sense?
Im not sure how to debug this any further :/
Seems like the problem is uncaught exceptions, and Raven will force a shutdown. Is there a way to avoid this behavior? It still looks like the uncaught exception is from within Raven (the getName function in transports.js:41).
What Node version are you using? getName is a method of http.Agent which comes from Node's standard library. It should always be accessible, unless http was somehow overridden.
I'm running node:0.10.41-slim in a Meteor environment in a Docker container.
Maybe I should look into a http polyfill for old node?
You can just use v2.1.2 instead, it should work just fine. Per our readme:
The last known working version for v0.10 and v0.12 is raven-node@2.1.2. Please use this version if you need to support those releases of Node.
Feel free to reopen if an issue still persists
| gharchive/issue | 2017-11-07T21:29:20 | 2025-04-01T06:44:18.348422 | {
"authors": [
"DesignMonkey",
"kamilogorek"
],
"repo": "getsentry/raven-node",
"url": "https://github.com/getsentry/raven-node/issues/395",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
153804891 | Update lsmod to 1.0.0
This is for issue #152 to handle a null exception if no node modules are loaded (required).
Thanks. :)
| gharchive/pull-request | 2016-05-09T15:35:16 | 2025-04-01T06:44:18.349611 | {
"authors": [
"arabold",
"mattrobenolt"
],
"repo": "getsentry/raven-node",
"url": "https://github.com/getsentry/raven-node/pull/154",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
188841463 | Adding header info to the error message if any
In case we are debugging any strange error, this information can be super useful.
I was finding a lot of errors and the header sometimes adds useful hints.
I don't know if the format of the error is adequate or if you prefer logging in a different fashion.
Thanks a lot!
Huge thumbs up on adding this info, will take a look at the implementation soon.
| gharchive/pull-request | 2016-11-11T20:15:34 | 2025-04-01T06:44:18.351115 | {
"authors": [
"nateberkopec",
"rafadc"
],
"repo": "getsentry/raven-ruby",
"url": "https://github.com/getsentry/raven-ruby/pull/585",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
312465745 | Does not work with Jenkins
OS:
[ ] Windows
[x] MacOS
[ ] Linux
Platform:
[x] iOS
[ ] Android
Output of node -v && npm -v && npm ls --prod --depth=0 (from Jenkins)
+ node -v
v9.8.0
+ npm -v
5.6.0
+ npm ls --prod --depth=0
/Users/[username]/.jenkins/workspace/test
I have the following issue:
I have Jenkins installed on my old Mac to automate the building process. I can successfully compile the project through its command line. But when I run the compilation through Jenkins, I get this error in the Build Phase:
Actual result:
❌ error: Can't find '/Users/[username]/.jenkins/workspace/[project path]/node_modules/@sentry/cli/sentry-cli' binary to build React Native bundle
PS:
The file is actually there, if I create a Jenkins task that calls this file, the Jenkins can call it, and sentry-cli returns the help text. Therefore I don't think there's a permission issue between Jenkins and sentry-cli file.
Hey, can you test a simple Jenkins job that does
npm install @sentry/cli
and then calls node node_modules/.bin/sentry-cli to check if that works.
It seems very odd that it does not work.
Sorry that I forgot to close the issue. The real reason was that the path included whitespace and one of the tools (probably Xcode) does not escape it by default.
| gharchive/issue | 2018-04-09T09:53:15 | 2025-04-01T06:44:18.355938 | {
"authors": [
"HazAT",
"firatakandere"
],
"repo": "getsentry/react-native-sentry",
"url": "https://github.com/getsentry/react-native-sentry/issues/389",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1928858054 | [Cron Monitoring] DateTimeZone Argument Error
How do you use Sentry?
Sentry SaaS (sentry.io)
SDK version
3.8.1
Steps to reproduce
We recently updated from Sentry version 3.3.3 to 3.8.1, and when we did, our Cron monitor started throwing errors. We aren't doing anything fancy, but here's our command:
$schedule->command(LogScheduledTasksRunning::class)->everyMinute()->sentryMonitor();
Expected result
Using the old version of the SDK, the sentryMonitor() call ran without issue.
We've found that passing sentryMonitor(updateMonitorConfig: false) fixes the issue.
Actual result
This error is thrown:
TypeError
Sentry\MonitorConfig::__construct(): Argument #4 ($timezone) must be of type ?string, DateTimeZone given, called in /var/www/html/api/vendor/sentry/sentry-laravel/src/Sentry/Laravel/Features/ConsoleIntegration.php on line 97
at vendor/sentry/sentry/src/MonitorConfig.php:29
25▕ * @var string|null The timezone
26▕ */
27▕ private $timezone;
28▕
➜ 29▕ public function __construct(
30▕ MonitorSchedule $schedule,
31▕ ?int $checkinMargin = null,
32▕ ?int $maxRuntime = null,
33▕ ?string $timezone = null
+25 vendor frames
26 artisan:35
Illuminate\Foundation\Console\Kernel::handle()
We released 3.8.2, which fixes this issue.
| gharchive/issue | 2023-10-05T18:45:51 | 2025-04-01T06:44:18.473945 | {
"authors": [
"cleptric",
"cyreb7"
],
"repo": "getsentry/sentry-laravel",
"url": "https://github.com/getsentry/sentry-laravel/issues/782",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1699639811 | DjangoCache IndexError raised when using keyword approach
How do you use Sentry?
Sentry Saas (sentry.io)
Version
1.22.1
Steps to Reproduce
Install the latest version
Run the code with django cache and get the key using keyword approach
Observe IndexError issue
Snippet:
from django.core.cache import cache
cache.get(key="my_key") # <-- `IndexError` as there will no `args[0]` which is used for spans
Expected Result
No exception raised and value retrieved
Actual Result
IndexError raised:
IndexError
tuple index out of range
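The fix on the integration side amounts to resolving the key from either positional or keyword arguments before building the span description, instead of assuming `args[0]` exists. A minimal sketch of that pattern (the wrapper and the stand-in cache backend below are illustrative, not the SDK's actual code):

```python
import functools

# Span descriptions recorded by the illustrative wrapper.
recorded_spans = []

def traced_get(real_get):
    """Wrap a cache ``get`` so the key is found whether it was passed
    positionally or as ``key=...`` -- avoiding IndexError on args[0]."""
    @functools.wraps(real_get)
    def wrapper(*args, **kwargs):
        # Resolve the key defensively instead of assuming args[0] exists.
        key = args[0] if args else kwargs.get("key")
        recorded_spans.append("cache.get {}".format(key))
        return real_get(*args, **kwargs)
    return wrapper

# Stand-in for the django cache backend:
_store = {"my_key": "hello"}

@traced_get
def cache_get(key):
    return _store.get(key)

assert cache_get("my_key") == "hello"      # positional still works
assert cache_get(key="my_key") == "hello"  # keyword no longer raises IndexError
```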
Thanks for reporting this @sshishov !
I will fix this.
| gharchive/issue | 2023-05-08T06:48:40 | 2025-04-01T06:44:18.484198 | {
"authors": [
"antonpirker",
"sshishov"
],
"repo": "getsentry/sentry-python",
"url": "https://github.com/getsentry/sentry-python/issues/2085",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2281169833 | No more last_event_id
How do you use Sentry?
Sentry Saas (sentry.io)
Version
2.1.0
Steps to Reproduce
https://develop.sentry.dev/sdk/features/#retrieve-last-event-id
The method last_event_id does not exist anymore. I understand it's been deprecated, but how are we supposed to show the event_id to the user now?
from sentry_sdk import last_event_id
Expected Result
Be able to get the last event id from a non controlled exception and show it to the final user.
In django for example, the recommended way was this:
from sentry_sdk import last_event_id
from django.shortcuts import render
def custom_handler500(request):
context = {
"request": request,
"title": "Error 500",
"last_event_id": last_event_id(),
}
template_name = "500.html"
return render(request, template_name, context, status=500)
Actual Result
ImportError: cannot import name 'last_event_id' from 'sentry_sdk' (/app/.venv/lib/python3.11/site-packages/sentry_sdk/__init__.py)
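Until the helper returns, one possible interim workaround is to track the last event ID yourself through the `before_send` hook. The hook itself is real SDK API; everything else below is a self-contained sketch that simulates the SDK invoking it:

```python
# Sketch: track the last event ID ourselves until the SDK re-adds the helper.
# before_send receives every event dict before it is sent to Sentry, so
# storing its "event_id" gives us the equivalent of the removed function.
_last_event_id = None

def before_send(event, hint):
    global _last_event_id
    _last_event_id = event.get("event_id")
    return event  # returning the event lets it still be sent to Sentry

def last_event_id():
    return _last_event_id

# In a real app this would be registered via:
#   sentry_sdk.init(dsn=..., before_send=before_send)
# Here we simulate the SDK delivering an event to the hook:
before_send({"event_id": "abc123"}, hint=None)
print(last_event_id())  # abc123
```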
Hi @maltalk, thank you for reporting this feedback. We plan to re-add this functionality to the SDK, although we might rename the function
Thanks @szokeasaurusrex for your quick response.
Could you give me a date estimate?
Right now we are frozen on version 1.45 for this reason.
I'm not trying to rush anything at all. We understand that after the total refactor of the SDK it is not something that can be done overnight.
No estimate yet @maltalk – however, since we need the last_event_id functionality for the user feedback dialog, we will likely prioritize adding this back
That's nice. Thank you for your work. When will version 2.2.0 be available?
@maltalk it is done already
| gharchive/issue | 2024-05-06T15:35:18 | 2025-04-01T06:44:18.489962 | {
"authors": [
"maltalk",
"szokeasaurusrex"
],
"repo": "getsentry/sentry-python",
"url": "https://github.com/getsentry/sentry-python/issues/3049",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
375445292 | Include LICENSE in sdist and clarify in README
Used a MANIFEST.in file to include the LICENSE file in PyPI sdist, as required by the BSD license.
Corrected README to say BSD not MIT, so it matches the LICENSE file and setup.py.
Codecov Report
Merging #158 into master will decrease coverage by 1.01%.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #158 +/- ##
==========================================
- Coverage 67.57% 66.56% -1.02%
==========================================
Files 28 27 -1
Lines 2751 2207 -544
Branches 392 377 -15
==========================================
- Hits 1859 1469 -390
+ Misses 740 615 -125
+ Partials 152 123 -29
| Impacted Files | Coverage Δ | |
|---|---|---|
| sentry_sdk/integrations/flask.py | 0% <0%> (-70.65%) | :arrow_down: |
| sentry_sdk/integrations/rq.py | 0% <0%> (-68.97%) | :arrow_down: |
| sentry_sdk/integrations/pyramid.py | 0% <0%> (-66.06%) | :arrow_down: |
| sentry_sdk/integrations/celery.py | 0% <0%> (-64.18%) | :arrow_down: |
| sentry_sdk/_compat.py | 52% <0%> (-28.77%) | :arrow_down: |
| sentry_sdk/integrations/aws_lambda.py | 0% <0%> (-12.83%) | :arrow_down: |
| sentry_sdk/integrations/sanic.py | | |
| sentry_sdk/hub.py | 79.67% <0%> (+4.02%) | :arrow_up: |
| sentry_sdk/worker.py | 87.34% <0%> (+4.2%) | :arrow_up: |
| sentry_sdk/integrations/django/__init__.py | 77.01% <0%> (+5.35%) | :arrow_up: |
| ... and 12 more | | |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 1128386...7f9c765. Read the comment docs.
thanks, I will merge as soon as the build is green
@mcs07 do you need a new release for this?
No it's not urgently needed, thanks.
| gharchive/pull-request | 2018-10-30T11:38:46 | 2025-04-01T06:44:18.507521 | {
"authors": [
"codecov-io",
"mcs07",
"untitaker"
],
"repo": "getsentry/sentry-python",
"url": "https://github.com/getsentry/sentry-python/pull/158",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
944595718 | Links open File Explorer instead of browser
This has been a long-standing issue that hopefully can be resolved now that Station is open source. Often, clicking a link opens File Explorer instead of the default browser. I've never had a single link I wanted to open in File Explorer, so hopefully that behavior can be changed.
Is it still an issue?
I'm closing the ticket. It's too old
We will create a new one if it is still an issue.
| gharchive/issue | 2021-07-14T16:20:23 | 2025-04-01T06:44:18.586964 | {
"authors": [
"mtschudi",
"viktor44"
],
"repo": "getstation/desktop-app",
"url": "https://github.com/getstation/desktop-app/issues/8",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
258405563 | [SDK-72]: Cleaned up YotiClientEngine
Cleaned up YotiClientEngine - moved relevant parts to CryptoEngine.cs, and broke up a very lengthy method into a set of smaller methods.
@echarrod Are these changes covered by tests?
Aside from the test I mentioned earlier - where we need to add the extra pieces of profile information, the rest of these changes are covered by tests. I can look at covering those areas next.
| gharchive/pull-request | 2017-09-18T08:44:02 | 2025-04-01T06:44:18.601287 | {
"authors": [
"echarrod",
"vassyz"
],
"repo": "getyoti/.net",
"url": "https://github.com/getyoti/.net/pull/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2702674591 | LLMWrapper provides a command interface for the LLM Registry
Ticket: LLM Wrapper Provides a Command Interface for the LLM Registry
User Story
As a user, I want the LLM Wrapper to provide a command interface for the LLM Registry so that the Registry can manage and control the Wrapper’s lifecycle and operations.
Description
The LLM Wrapper must expose a command interface that allows the LLM Registry to interact with it programmatically. This interface will enable the Registry to perform operations such as deploying, stopping, restarting, or shutting down the Wrapper or its LLM. The interface should be secure, scalable, and capable of handling concurrent requests.
Required Commands
Deploy LLM:
Accepts parameters for deploying a specific LLM, such as the LLM type or version.
Returns a deployment status or error.
Stop LLM:
Instructs the Wrapper to stop the currently running LLM.
Returns the completion status of the stop operation.
Wrapper Status:
Provides the current status of the Wrapper (e.g., "Idle," "Ready," "Stopping," "Deploying").
Includes detailed information, such as the deployed LLM, version, and uptime.
Acceptance Criteria
[ ] The LLM Wrapper provides an API or similar interface accessible to the LLM Registry.
[ ] The interface supports the following commands:
[ ] Deploy LLM
[ ] Stop LLM
[ ] Wrapper Status
[ ] Errors during command execution are logged, and appropriate error messages are returned.
[ ] Commands can be executed concurrently without conflict or delays.
Test Criteria
[ ] Functional Tests:
[ ] Test deploying an LLM using the interface and verify deployment success.
[ ] Test stopping operations and verify accurate status updates.
[ ] Query the Wrapper’s status and validate the returned information.
[ ] Error Handling Tests:
[ ] Test scenarios where commands fail mid-execution and validate error logging and response handling.
[ ] Performance Tests:
[ ] Measure the time taken for each command under normal conditions.
[ ] Test the interface’s ability to handle multiple concurrent requests without performance degradation.
[ ] End-to-End Tests:
[ ] Execute the full lifecycle of commands (deploy, stop) and ensure the Wrapper responds appropriately at each step.
Registry sends HTTP requests to the Wrapper:
POST /deploy to deploy a model,
GET /status to check the model's current status,
POST /stop to stop the model.
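The three endpoints above can be sketched as a minimal command handler on the Wrapper side. Everything below (class name, payload shapes, state strings) is an illustrative assumption until the finalized API specification lands:

```python
# Sketch of the Wrapper's command handling behind POST /deploy, POST /stop
# and GET /status. Names and payload shapes are assumptions, not the final API.
class LLMWrapper:
    def __init__(self):
        self.state = "Idle"
        self.deployed = None  # {"llm": ..., "version": ...} once deployed

    def deploy(self, llm, version):
        """POST /deploy -- returns a deployment status or an error."""
        if self.state != "Idle":
            return {"ok": False, "error": "wrapper busy ({})".format(self.state)}
        self.state = "Deploying"
        self.deployed = {"llm": llm, "version": version}
        self.state = "Ready"
        return {"ok": True, "status": self.state}

    def stop(self):
        """POST /stop -- returns the completion status of the stop operation."""
        self.state = "Stopping"
        self.deployed = None
        self.state = "Idle"
        return {"ok": True, "status": self.state}

    def status(self):
        """GET /status -- current state plus deployed-LLM details."""
        return {"status": self.state, "deployed": self.deployed}

w = LLMWrapper()
print(w.deploy("llama", "7b"))  # {'ok': True, 'status': 'Ready'}
print(w.status()["status"])     # Ready
print(w.stop())                 # {'ok': True, 'status': 'Idle'}
```

A real deployment would put an HTTP layer in front of these methods; the dispatch logic itself stays the same.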
I'll also be sending the finalized API specification once the details are fully confirmed.
| gharchive/issue | 2024-11-28T16:29:14 | 2025-04-01T06:44:18.624636 | {
"authors": [
"gflachs",
"mouradmhb"
],
"repo": "gflachs/GreenPrompt_LLM_Wrapper",
"url": "https://github.com/gflachs/GreenPrompt_LLM_Wrapper/issues/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
184215248 | Alfred gets disconnected and doesnt reconnect by itself.
Happened on an external server.
strace says:
strace -p 25539
Process 25539 attached
select(4, [3], [], [], {15, 312840}) = 0 (Timeout)
clock_gettime(CLOCK_MONOTONIC, {7504257, 407523298}) = 0
open("/usr/share/gotham/modules.d/save/alfred.xxx.fr.save.tmp", O_WRONLY|O_CREAT|O_TRUNC, 0700) = 6
write(6, "{\n\t\"me\":\t{\n\t\t\"features\":\t{\n\t\t\t\"x"..., 5888) = 5888
close(6) = 0
rename("/usr/share/gotham/modules.d/save/alfred.xxx.fr.save.tmp", "/usr/share/gotham/modules.d/save/alfred.xxx.fr.save") = 0
select(4, [3], [], [], {29, 972715}) = 0 (Timeout)
clock_gettime(CLOCK_MONOTONIC, {7504287, 411634697}) = 0
open("/usr/share/gotham/modules.d/save/alfred.xxx.fr.save.tmp", O_WRONLY|O_CREAT|O_TRUNC, 0700) = 6
write(6, "{\n\t\"me\":\t{\n\t\t\"features\":\t{\n\t\t\t\"x"..., 5888) = 5888
close(6) = 0
rename("/usr/share/gotham/modules.d/save/alfred.xxx.fr.save.tmp", "/usr/share/gotham/modules.d/save/alfred.xxx.fr.save") = 0
select(4, [3], [], [], {29, 968603}) = 0 (Timeout)
clock_gettime(CLOCK_MONOTONIC, {7504317, 412073087}) = 0
open("/usr/share/gotham/modules.d/save/alfred.xxx.fr.save.tmp", O_WRONLY|O_CREAT|O_TRUNC, 0700) = 6
write(6, "{\n\t\"me\":\t{\n\t\t\"features\":\t{\n\t\t\t\"x"..., 5888) = 5888
close(6) = 0
rename("/usr/share/gotham/modules.d/save/alfred.xxx.fr.save.tmp", "/usr/share/gotham/modules.d/save/alfred.xxx.fr.save") = 0
select(4, [3], [], [], {4, 176185}) = 0 (Timeout)
clock_gettime(CLOCK_MONOTONIC, {7504321, 593980921}) = 0
write(2, "ERR<25539>: ecore_con.c:694 ecor"..., 52) = 52
write(2, "safety check failed: svr->delete"..., 43) = 43
write(2, "\n", 1) = 1
The last 3 lines tell us a bit about the error, but it's not enough to understand it clearly.
netstat shows there isn't any socket connected to the server.
The logs tell me a little more:
grep -v 'svr->delete_me is true' </var/log/alfred | tail -n 10
ERR<25539>:shotgun src/lib/shotgun/login.c:307 shotgun_login() wtf
ERR<25539>:shotgun src/lib/shotgun/login.c:307 shotgun_login() wtf
ERR<25539>:shotgun src/lib/shotgun/login.c:307 shotgun_login() wtf
ERR<25539>:shotgun src/lib/shotgun/login.c:307 shotgun_login() wtf
ERR<25539>:shotgun src/lib/shotgun/login.c:307 shotgun_login() wtf
ERR<25539>:shotgun src/lib/shotgun/login.c:307 shotgun_login() wtf
ERR<25539>:shotgun src/lib/shotgun/login.c:307 shotgun_login() wtf
ERR<25539>:shotgun src/lib/shotgun/login.c:307 shotgun_login() wtf
ERR<25539>:shotgun src/lib/shotgun/login.c:307 shotgun_login() wtf
ERR<25539>:ecore_con ecore_con_dns.c:103 _ecore_con_dns_check() resolve failed: Connection refused
So prosody has been denying login operations to alfred a huge number of times, until the DNS server itself returned Connection refused when trying to resolve the IP address of the XMPP server.
This seems to have led the XMPP library into an endless error state.
Debug log :
ERR<28322>:ecore_con ecore_con_dns.c:103 _ecore_con_dns_check() resolve failed: Connection refused
DBG<28322>:ecore_con ecore_con.c:1738 _ecore_con_cb_tcp_connect() KILL 0xfb9050
DBG<28322>:ecore_con ecore_con.c:154 _ecore_con_server_kill() Multi kill request for svr 0xfb9050
So it seems that shotgun doesn't become aware that we had a DNS issue.
A fix should be written in either ecore or shotgun. Keeping this issue opened until fix is done.
And then the original error follows:
ERR<28322>: ecore_con.c:694 ecore_con_server_send() safety check failed: svr->delete_me is true
This is probably the ping from shotgun (sending a single space) that creates the error.
So the bug lies in ecore itself when using dns.c as a dns resolver.
Disabling ipv6 would switch to fork+getaddrinfo, fixing the issue.
So either change the DNS resolver, or fix the use of dns.c.
When using dns.c, we will jump to this condition : https://github.com/gfriloux/ecore/blob/master/src/lib/ecore_con/ecore_con.c#L1658
if (!net_info) /* error message has already been handled */
{
svr->delete_me = EINA_TRUE;
goto error;
}
But the comment is wrong when dns.c is used, because the error will not have been handled.
Fixed by https://github.com/gfriloux/ecore/commit/f85f19962bdccf4d593da761201e3882bd5697da
It will now be more verbose :
ERR<30266>:ecore_con ecore_con_dns.c:103 _ecore_con_dns_check() resolve failed: Connection refused
ERR<30266>:ecore_con ecore_con.c:1232 _ecore_con_event_server_error() Connection refused
DBG<30266>:ecore_con ecore_con.c:1661 _ecore_con_cb_tcp_connect() HERE
DBG<30266>:ecore_con ecore_con.c:1763 _ecore_con_cb_tcp_connect() KILL 0x750670
DBG<30266>:ecore_con ecore_con.c:154 _ecore_con_server_kill() Multi kill request for svr 0x750670
ERR<30266>:shotgun src/lib/shotgun/shotgun.c:254 error() Connection refused
| gharchive/issue | 2016-10-20T12:25:51 | 2025-04-01T06:44:18.633069 | {
"authors": [
"gfriloux"
],
"repo": "gfriloux/botman",
"url": "https://github.com/gfriloux/botman/issues/24",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
414364344 | [D3D12] E_ACCESSDENIED on CreateSwapChainForHwnd
Short info header:
GFX version: 38c6a9cf3fec8fe78193b27792145245c086ca6e (somewhat recent master)
OS: Windows 10
GPU: Intel® UHD Graphics 620
When resizing the window the swapchain fails to be recreated with an E_ACCESSDENIED on CreateSwapChainForHwnd
error on swapchain creation 0x80070005
This seems to only happen on Intel GPUs.
I'm the one experiencing this bug, so let me know if I can do anything to help!
Thanks for filing the issue! Does it happen for you with the quad example?
If not:
are you initializing from winit or something else?
based on a similar thread it could be that the HWND handle dies. We don't retain it after a surface is created with it.
could you provide a test case (figuring out the differences with the example would help)?
I didn't fully investigate this yet, but it seems to be related to OBS's Game Capture (and maybe other forms of capture as well). With that, it also happens on NVIDIA GPUs and probably every other GPU too, so this isn't specifically an Intel problem.
Could you clarify what "Game Capture" and "OBS" mean here? @CryZe
OBS is Open Broadcast Software, which is used for doing livestreams. https://obsproject.com/
So if you use it for capturing the game using gfx-hal (and the D3D12 backend) (it probably needs to be specifically the "Game Capture" option), then resizing the window crashes it with the error in the opening post.
@CryZe could you try a simple workaround like this - https://github.com/gfx-rs/wgpu/commit/ecaacfa20e0a4f4de6802303ec6b529d822a30bd#diff-85d17e36ebfabf7f5c8cd58cae20a156R1528
I could try this, but probably not till Thursday or Friday.
I already have a wait_idle() there, so that doesn't seem to help.
Skimming a bit through OBS source, I would say this is rather an issue on their side than ours. Nothing stops us from recreating swapchains on the fly but OBS doesn't seem to handle that case. Recreation is currently what's happening in the resize code. Friction between gfx's dx12 backend and OBS will be reduced once we implement support for a ResizeBuffers variant, which is the preferred but more limited way of resizing. In 95% of the cases ResizeBuffers would be fine but doesn't provide full functionality of Vulkans swapchain handling.
Crudely hacked in support for ResizeBuffers and the crash disappeared
| gharchive/issue | 2019-02-25T23:56:08 | 2025-04-01T06:44:18.650605 | {
"authors": [
"CryZe",
"kvark",
"msiglreith",
"steveklabnik"
],
"repo": "gfx-rs/gfx",
"url": "https://github.com/gfx-rs/gfx/issues/2654",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
342149029 | Cut logging in release
Fixes #2237
PR checklist:
[x] make succeeds (on *nix)
[ ] make reftests succeeds
[ ] tested examples with the following backends:
[ ] rustfmt run on changed code
@grovesNL that sounds good.
bors r=grovesNL
| gharchive/pull-request | 2018-07-18T02:05:04 | 2025-04-01T06:44:18.653209 | {
"authors": [
"kvark"
],
"repo": "gfx-rs/gfx",
"url": "https://github.com/gfx-rs/gfx/pull/2246",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1758791735 | ⚠️ Ristorante Gelateria All'Angelo Da Lalo has degraded performance
In efc7243, Ristorante Gelateria All'Angelo Da Lalo ($SITE_RSTRLL) experienced degraded performance:
HTTP code: 200
Response time: 5908 ms
Resolved: Ristorante Gelateria All'Angelo Da Lalo performance has improved in a451580.
| gharchive/issue | 2023-06-15T13:06:57 | 2025-04-01T06:44:18.667863 | {
"authors": [
"ggardin"
],
"repo": "ggardin/uptime-monitor",
"url": "https://github.com/ggardin/uptime-monitor/issues/500",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
53414631 | ag doesn't search directory called lib
I'm getting different results between ack and ag in a particular directory because ag ignores the subfolder called lib. I'm using ag 0.27.0 on OS X.
tdhopper@~/temp $ echo "hello" > file1.txt
tdhopper@~/temp $ mkdir lib
tdhopper@~/temp $ cp file1.txt lib/file2.txt
tdhopper@~/temp $ mkdir dir2
tdhopper@~/temp $ cp file1.txt dir2/file3.txt
tdhopper@~/temp $ ag hello
dir2/file3.txt
1:hello
file1.txt
1:hello
tdhopper@~/temp $ ack hello
dir2/file3.txt
1:hello
file1.txt
1:hello
lib/file2.txt
1:hello
(Perhaps this is documented, but I can't find that anywhere.)
What is the output of ag --debug hello?
tdhopper@~/temp $ ag --debug hello
DEBUG: Found user's home dir: /Users/tdhopper
DEBUG: Skipping ignore file /Users/tdhopper/.agignore
DEBUG: added ignore pattern Updated: 2012-05-30 17:25 to
DEBUG: added ignore pattern *.py[cod] to
DEBUG: added ignore pattern *.so to
DEBUG: added ignore pattern *.egg to
DEBUG: added ignore pattern *.egg-info to
DEBUG: added ignore pattern dist to
DEBUG: added ignore pattern build to
DEBUG: added ignore pattern eggs to
DEBUG: added ignore pattern parts to
DEBUG: added ignore pattern bin to
DEBUG: added ignore pattern var to
DEBUG: added ignore pattern sdist to
DEBUG: added ignore pattern develop-eggs to
DEBUG: added ignore pattern .installed.cfg to
DEBUG: added ignore pattern lib to
DEBUG: added ignore pattern lib64 to
DEBUG: added ignore pattern pip-log.txt to
DEBUG: added ignore pattern .coverage to
DEBUG: added ignore pattern .tox to
DEBUG: added ignore pattern nosetests.xml to
DEBUG: added ignore pattern *.mo to
DEBUG: added ignore pattern .mr.developer.cfg to
DEBUG: added ignore pattern *.acn to
DEBUG: added ignore pattern *.acr to
DEBUG: added ignore pattern *.alg to
DEBUG: added ignore pattern *.aux to
DEBUG: added ignore pattern *.bbl to
DEBUG: added ignore pattern *.blg to
DEBUG: added ignore pattern *.dvi to
DEBUG: added ignore pattern *.fdb_latexmk to
DEBUG: added ignore pattern *.glg to
DEBUG: added ignore pattern *.glo to
DEBUG: added ignore pattern *.gls to
DEBUG: added ignore pattern *.idx to
DEBUG: added ignore pattern *.ilg to
DEBUG: added ignore pattern *.ind to
DEBUG: added ignore pattern *.ist to
DEBUG: added ignore pattern *.lof to
DEBUG: added ignore pattern *.log to
DEBUG: added ignore pattern *.lot to
DEBUG: added ignore pattern *.maf to
DEBUG: added ignore pattern *.mtc to
DEBUG: added ignore pattern *.mtc0 to
DEBUG: added ignore pattern *.nav to
DEBUG: added ignore pattern *.nlo to
DEBUG: added ignore pattern *.out to
DEBUG: added ignore pattern *.pdfsync to
DEBUG: added ignore pattern *.ps to
DEBUG: added ignore pattern *.snm to
DEBUG: added ignore pattern *.synctex.gz to
DEBUG: added ignore pattern *.toc to
DEBUG: added ignore pattern *.vrb to
DEBUG: added ignore pattern *.xdy to
DEBUG: added ignore pattern *.tmproj to
DEBUG: added ignore pattern *.tmproject to
DEBUG: added ignore pattern tmtags to
DEBUG: added ignore pattern .DS_Store to
DEBUG: added ignore pattern ._* to
DEBUG: added ignore pattern .Spotlight-V100 to
DEBUG: added ignore pattern .Trashes to
DEBUG: added ignore pattern .*.sw[a-z] to
DEBUG: added ignore pattern *.un~ to
DEBUG: added ignore pattern Session.vim to
DEBUG: added ignore pattern .* to
DEBUG: added ignore pattern !.gitignore to
DEBUG: added ignore pattern *~ to
DEBUG: added ignore pattern .directory to
DEBUG: Query is hello
DEBUG: PCRE Version: 8.36 2014-09-26
DEBUG: Using 7 workers
DEBUG: DEBUG: DEBUG: DEBUG: Worker 1 startedWorker 0 startedDEBUG: DEBUG: Worker 2 startedDEBUG: Worker 3 startedDEBUG:
Worker 4 startedWorker 5 started
searching path . for hello
Worker 6 started
DEBUG: Skipping ignore file ./.agignore
DEBUG: Skipping ignore file ./.gitignore
DEBUG: Skipping ignore file ./.git/info/exclude
DEBUG: Skipping ignore file ./.hgignore
DEBUG: Skipping svn ignore file ./.svn/dir-prop-base
DEBUG: path_start . filename dir2
DEBUG: temp: /dir2 abs path:
DEBUG: file dir2 not ignored
DEBUG: temp: /dir2/ abs path:
DEBUG: file dir2/ not ignored
DEBUG: temp: /dir2 abs path:
DEBUG: pattern dir2 doesn't match name .DS_Store
DEBUG: pattern dir2 doesn't match name .Spotlight-V100
DEBUG: pattern dir2 doesn't match name .Trashes
DEBUG: pattern dir2 doesn't match name .coverage
DEBUG: pattern dir2 doesn't match name .directory
DEBUG: pattern dir2 doesn't match name .installed.cfg
DEBUG: pattern dir2 doesn't match name .mr.developer.cfg
DEBUG: pattern dir2 doesn't match name .tox
DEBUG: pattern dir2 doesn't match name Session.vim
DEBUG: pattern dir2 doesn't match name Updated: 2012-05-30 17:25
DEBUG: pattern dir2 doesn't match name bin
DEBUG: pattern dir2 doesn't match name build
DEBUG: pattern dir2 doesn't match name develop-eggs
DEBUG: pattern dir2 doesn't match name dist
DEBUG: pattern dir2 doesn't match name eggs
DEBUG: pattern dir2 doesn't match name lib
DEBUG: pattern dir2 doesn't match name lib64
DEBUG: pattern dir2 doesn't match name nosetests.xml
DEBUG: pattern dir2 doesn't match name parts
DEBUG: pattern dir2 doesn't match name pip-log.txt
DEBUG: pattern dir2 doesn't match name sdist
DEBUG: pattern dir2 doesn't match name tmtags
DEBUG: pattern dir2 doesn't match name var
DEBUG: pattern !.gitignore doesn't match file dir2
DEBUG: pattern *.acn doesn't match file dir2
DEBUG: pattern *.acr doesn't match file dir2
DEBUG: pattern *.alg doesn't match file dir2
DEBUG: pattern *.aux doesn't match file dir2
DEBUG: pattern *.bbl doesn't match file dir2
DEBUG: pattern *.blg doesn't match file dir2
DEBUG: pattern *.dvi doesn't match file dir2
DEBUG: pattern *.egg doesn't match file dir2
DEBUG: pattern *.egg-info doesn't match file dir2
DEBUG: pattern *.fdb_latexmk doesn't match file dir2
DEBUG: pattern *.glg doesn't match file dir2
DEBUG: pattern *.glo doesn't match file dir2
DEBUG: pattern *.gls doesn't match file dir2
DEBUG: pattern *.idx doesn't match file dir2
DEBUG: pattern *.ilg doesn't match file dir2
DEBUG: pattern *.ind doesn't match file dir2
DEBUG: pattern *.ist doesn't match file dir2
DEBUG: pattern *.lof doesn't match file dir2
DEBUG: pattern *.log doesn't match file dir2
DEBUG: pattern *.lot doesn't match file dir2
DEBUG: pattern *.maf doesn't match file dir2
DEBUG: pattern *.mo doesn't match file dir2
DEBUG: pattern *.mtc doesn't match file dir2
DEBUG: pattern *.mtc0 doesn't match file dir2
DEBUG: pattern *.nav doesn't match file dir2
DEBUG: pattern *.nlo doesn't match file dir2
DEBUG: pattern *.out doesn't match file dir2
DEBUG: pattern *.pdfsync doesn't match file dir2
DEBUG: pattern *.ps doesn't match file dir2
DEBUG: pattern *.py[cod] doesn't match file dir2
DEBUG: pattern *.snm doesn't match file dir2
DEBUG: pattern *.so doesn't match file dir2
DEBUG: pattern *.synctex.gz doesn't match file dir2
DEBUG: pattern *.tmproj doesn't match file dir2
DEBUG: pattern *.tmproject doesn't match file dir2
DEBUG: pattern *.toc doesn't match file dir2
DEBUG: pattern *.un~ doesn't match file dir2
DEBUG: pattern *.vrb doesn't match file dir2
DEBUG: pattern *.xdy doesn't match file dir2
DEBUG: pattern *~ doesn't match file dir2
DEBUG: pattern .* doesn't match file dir2
DEBUG: pattern .*.sw[a-z] doesn't match file dir2
DEBUG: pattern ._* doesn't match file dir2
DEBUG: file dir2 not ignored
DEBUG: temp: /dir2/ abs path:
DEBUG: pattern dir2/ doesn't match name .DS_Store
DEBUG: pattern dir2/ doesn't match name .Spotlight-V100
DEBUG: pattern dir2/ doesn't match name .Trashes
DEBUG: pattern dir2/ doesn't match name .coverage
DEBUG: pattern dir2/ doesn't match name .directory
DEBUG: pattern dir2/ doesn't match name .installed.cfg
DEBUG: pattern dir2/ doesn't match name .mr.developer.cfg
DEBUG: pattern dir2/ doesn't match name .tox
DEBUG: pattern dir2/ doesn't match name Session.vim
DEBUG: pattern dir2/ doesn't match name Updated: 2012-05-30 17:25
DEBUG: pattern dir2/ doesn't match name bin
DEBUG: pattern dir2/ doesn't match name build
DEBUG: pattern dir2/ doesn't match name develop-eggs
DEBUG: pattern dir2/ doesn't match name dist
DEBUG: pattern dir2/ doesn't match name eggs
DEBUG: pattern dir2/ doesn't match name lib
DEBUG: pattern dir2/ doesn't match name lib64
DEBUG: pattern dir2/ doesn't match name nosetests.xml
DEBUG: pattern dir2/ doesn't match name parts
DEBUG: pattern dir2/ doesn't match name pip-log.txt
DEBUG: pattern dir2/ doesn't match name sdist
DEBUG: pattern dir2/ doesn't match name tmtags
DEBUG: pattern dir2/ doesn't match name var
DEBUG: pattern !.gitignore doesn't match file dir2/
DEBUG: pattern *.acn doesn't match file dir2/
DEBUG: pattern *.acr doesn't match file dir2/
DEBUG: pattern *.alg doesn't match file dir2/
DEBUG: pattern *.aux doesn't match file dir2/
DEBUG: pattern *.bbl doesn't match file dir2/
DEBUG: pattern *.blg doesn't match file dir2/
DEBUG: pattern *.dvi doesn't match file dir2/
DEBUG: pattern *.egg doesn't match file dir2/
DEBUG: pattern *.egg-info doesn't match file dir2/
DEBUG: pattern *.fdb_latexmk doesn't match file dir2/
DEBUG: pattern *.glg doesn't match file dir2/
DEBUG: pattern *.glo doesn't match file dir2/
DEBUG: pattern *.gls doesn't match file dir2/
DEBUG: pattern *.idx doesn't match file dir2/
DEBUG: pattern *.ilg doesn't match file dir2/
DEBUG: pattern *.ind doesn't match file dir2/
DEBUG: pattern *.ist doesn't match file dir2/
DEBUG: pattern *.lof doesn't match file dir2/
DEBUG: pattern *.log doesn't match file dir2/
DEBUG: pattern *.lot doesn't match file dir2/
DEBUG: pattern *.maf doesn't match file dir2/
DEBUG: pattern *.mo doesn't match file dir2/
DEBUG: pattern *.mtc doesn't match file dir2/
DEBUG: pattern *.mtc0 doesn't match file dir2/
DEBUG: pattern *.nav doesn't match file dir2/
DEBUG: pattern *.nlo doesn't match file dir2/
DEBUG: pattern *.out doesn't match file dir2/
DEBUG: pattern *.pdfsync doesn't match file dir2/
DEBUG: pattern *.ps doesn't match file dir2/
DEBUG: pattern *.py[cod] doesn't match file dir2/
DEBUG: pattern *.snm doesn't match file dir2/
DEBUG: pattern *.so doesn't match file dir2/
DEBUG: pattern *.synctex.gz doesn't match file dir2/
DEBUG: pattern *.tmproj doesn't match file dir2/
DEBUG: pattern *.tmproject doesn't match file dir2/
DEBUG: pattern *.toc doesn't match file dir2/
DEBUG: pattern *.un~ doesn't match file dir2/
DEBUG: pattern *.vrb doesn't match file dir2/
DEBUG: pattern *.xdy doesn't match file dir2/
DEBUG: pattern *~ doesn't match file dir2/
DEBUG: pattern .* doesn't match file dir2/
DEBUG: pattern .*.sw[a-z] doesn't match file dir2/
DEBUG: pattern ._* doesn't match file dir2/
DEBUG: file dir2/ not ignored
DEBUG: path_start . filename file1.txt
DEBUG: temp: /file1.txt abs path:
DEBUG: file file1.txt not ignored
DEBUG: temp: /file1.txt abs path:
DEBUG: pattern file1.txt doesn't match name .DS_Store
DEBUG: pattern file1.txt doesn't match name .Spotlight-V100
DEBUG: pattern file1.txt doesn't match name .Trashes
DEBUG: pattern file1.txt doesn't match name .coverage
DEBUG: pattern file1.txt doesn't match name .directory
DEBUG: pattern file1.txt doesn't match name .installed.cfg
DEBUG: pattern file1.txt doesn't match name .mr.developer.cfg
DEBUG: pattern file1.txt doesn't match name .tox
DEBUG: pattern file1.txt doesn't match name Session.vim
DEBUG: pattern file1.txt doesn't match name Updated: 2012-05-30 17:25
DEBUG: pattern file1.txt doesn't match name bin
DEBUG: pattern file1.txt doesn't match name build
DEBUG: pattern file1.txt doesn't match name develop-eggs
DEBUG: pattern file1.txt doesn't match name dist
DEBUG: pattern file1.txt doesn't match name eggs
DEBUG: pattern file1.txt doesn't match name lib
DEBUG: pattern file1.txt doesn't match name lib64
DEBUG: pattern file1.txt doesn't match name nosetests.xml
DEBUG: pattern file1.txt doesn't match name parts
DEBUG: pattern file1.txt doesn't match name pip-log.txt
DEBUG: pattern file1.txt doesn't match name sdist
DEBUG: pattern file1.txt doesn't match name tmtags
DEBUG: pattern file1.txt doesn't match name var
DEBUG: pattern !.gitignore doesn't match file file1.txt
DEBUG: pattern *.acn doesn't match file file1.txt
DEBUG: pattern *.acr doesn't match file file1.txt
DEBUG: pattern *.alg doesn't match file file1.txt
DEBUG: pattern *.aux doesn't match file file1.txt
DEBUG: pattern *.bbl doesn't match file file1.txt
DEBUG: pattern *.blg doesn't match file file1.txt
DEBUG: pattern *.dvi doesn't match file file1.txt
DEBUG: pattern *.egg doesn't match file file1.txt
DEBUG: pattern *.egg-info doesn't match file file1.txt
DEBUG: pattern *.fdb_latexmk doesn't match file file1.txt
DEBUG: pattern *.glg doesn't match file file1.txt
DEBUG: pattern *.glo doesn't match file file1.txt
DEBUG: pattern *.gls doesn't match file file1.txt
DEBUG: pattern *.idx doesn't match file file1.txt
DEBUG: pattern *.ilg doesn't match file file1.txt
DEBUG: pattern *.ind doesn't match file file1.txt
DEBUG: pattern *.ist doesn't match file file1.txt
DEBUG: pattern *.lof doesn't match file file1.txt
DEBUG: pattern *.log doesn't match file file1.txt
DEBUG: pattern *.lot doesn't match file file1.txt
DEBUG: pattern *.maf doesn't match file file1.txt
DEBUG: pattern *.mo doesn't match file file1.txt
DEBUG: pattern *.mtc doesn't match file file1.txt
DEBUG: pattern *.mtc0 doesn't match file file1.txt
DEBUG: pattern *.nav doesn't match file file1.txt
DEBUG: pattern *.nlo doesn't match file file1.txt
DEBUG: pattern *.out doesn't match file file1.txt
DEBUG: pattern *.pdfsync doesn't match file file1.txt
DEBUG: pattern *.ps doesn't match file file1.txt
DEBUG: pattern *.py[cod] doesn't match file file1.txt
DEBUG: pattern *.snm doesn't match file file1.txt
DEBUG: pattern *.so doesn't match file file1.txt
DEBUG: pattern *.synctex.gz doesn't match file file1.txt
DEBUG: pattern *.tmproj doesn't match file file1.txt
DEBUG: pattern *.tmproject doesn't match file file1.txt
DEBUG: pattern *.toc doesn't match file file1.txt
DEBUG: pattern *.un~ doesn't match file file1.txt
DEBUG: pattern *.vrb doesn't match file file1.txt
DEBUG: pattern *.xdy doesn't match file file1.txt
DEBUG: pattern *~ doesn't match file file1.txt
DEBUG: pattern .* doesn't match file file1.txt
DEBUG: pattern .*.sw[a-z] doesn't match file file1.txt
DEBUG: pattern ._* doesn't match file file1.txt
DEBUG: file file1.txt not ignored
DEBUG: path_start . filename lib
DEBUG: temp: /lib abs path:
DEBUG: file lib not ignored
DEBUG: temp: /lib/ abs path:
DEBUG: file lib/ not ignored
DEBUG: file lib ignored because name matches static pattern lib
DEBUG: Searching dir ./dir2
DEBUG: Skipping ignore file ./dir2/.agignore
DEBUG: Skipping ignore file ./dir2/.gitignore
DEBUG: Skipping ignore file ./dir2/.git/info/exclude
DEBUG: Skipping ignore file ./dir2/.hgignore
DEBUG: Skipping svn ignore file ./dir2/.svn/dir-prop-base
DEBUG: path_start ./dir2 filename file3.txt
DEBUG: temp: /dir2/file3.txt abs path: dir2
DEBUG: file file3.txt not ignored
DEBUG: temp: /dir2/file3.txt abs path:
DEBUG: file file3.txt not ignored
DEBUG: temp: /dir2/file3.txt abs path:
DEBUG: pattern dir2/file3.txt doesn't match name .DS_Store
DEBUG: pattern dir2/file3.txt doesn't match name .Spotlight-V100
DEBUG: pattern dir2/file3.txt doesn't match name .Trashes
DEBUG: pattern dir2/file3.txt doesn't match name .coverage
DEBUG: pattern dir2/file3.txt doesn't match name .directory
DEBUG: pattern dir2/file3.txt doesn't match name .installed.cfg
DEBUG: pattern dir2/file3.txt doesn't match name .mr.developer.cfg
DEBUG: pattern dir2/file3.txt doesn't match name .tox
DEBUG: pattern dir2/file3.txt doesn't match name Session.vim
DEBUG: pattern dir2/file3.txt doesn't match name Updated: 2012-05-30 17:25
DEBUG: pattern dir2/file3.txt doesn't match name bin
DEBUG: pattern dir2/file3.txt doesn't match name build
DEBUG: pattern dir2/file3.txt doesn't match name develop-eggs
DEBUG: pattern dir2/file3.txt doesn't match name dist
DEBUG: pattern dir2/file3.txt doesn't match name eggs
DEBUG: pattern dir2/file3.txt doesn't match name lib
DEBUG: pattern dir2/file3.txt doesn't match name lib64
DEBUG: pattern dir2/file3.txt doesn't match name nosetests.xml
DEBUG: pattern dir2/file3.txt doesn't match name parts
DEBUG: pattern dir2/file3.txt doesn't match name pip-log.txt
DEBUG: pattern dir2/file3.txt doesn't match name sdist
DEBUG: pattern dir2/file3.txt doesn't match name tmtags
DEBUG: pattern dir2/file3.txt doesn't match name var
DEBUG: pattern !.gitignore doesn't match file file3.txt
DEBUG: pattern *.acn doesn't match file file3.txt
DEBUG: pattern *.acr doesn't match file file3.txt
DEBUG: pattern *.alg doesn't match file file3.txt
DEBUG: pattern *.aux doesn't match file file3.txt
DEBUG: pattern *.bbl doesn't match file file3.txt
DEBUG: pattern *.blg doesn't match file file3.txt
DEBUG: pattern *.dvi doesn't match file file3.txt
DEBUG: pattern *.egg doesn't match file file3.txt
DEBUG: pattern *.egg-info doesn't match file file3.txt
DEBUG: pattern *.fdb_latexmk doesn't match file file3.txt
DEBUG: pattern *.glg doesn't match file file3.txt
DEBUG: pattern *.glo doesn't match file file3.txt
DEBUG: pattern *.gls doesn't match file file3.txt
DEBUG: pattern *.idx doesn't match file file3.txt
DEBUG: pattern *.ilg doesn't match file file3.txt
DEBUG: pattern *.ind doesn't match file file3.txt
DEBUG: pattern *.ist doesn't match file file3.txt
DEBUG: pattern *.lof doesn't match file file3.txt
DEBUG: pattern *.log doesn't match file file3.txt
DEBUG: pattern *.lot doesn't match file file3.txt
DEBUG: pattern *.maf doesn't match file file3.txt
DEBUG: pattern *.mo doesn't match file file3.txt
DEBUG: pattern *.mtc doesn't match file file3.txt
DEBUG: pattern *.mtc0 doesn't match file file3.txt
DEBUG: pattern *.nav doesn't match file file3.txt
DEBUG: pattern *.nlo doesn't match file file3.txt
DEBUG: pattern *.out doesn't match file file3.txt
DEBUG: pattern *.pdfsync doesn't match file file3.txt
DEBUG: pattern *.ps doesn't match file file3.txt
DEBUG: pattern *.py[cod] doesn't match file file3.txt
DEBUG: pattern *.snm doesn't match file file3.txt
DEBUG: pattern *.so doesn't match file file3.txt
DEBUG: pattern *.synctex.gz doesn't match file file3.txt
DEBUG: pattern *.tmproj doesn't match file file3.txt
DEBUG: pattern *.tmproject doesn't match file file3.txt
DEBUG: pattern *.toc doesn't match file file3.txt
DEBUG: pattern *.un~ doesn't match file file3.txt
DEBUG: pattern *.vrb doesn't match file file3.txt
DEBUG: pattern *.xdy doesn't match file file3.txt
DEBUG: pattern *~ doesn't match file file3.txt
DEBUG: pattern .* doesn't match file file3.txt
DEBUG: pattern .*.sw[a-z] doesn't match file file3.txt
DEBUG: pattern ._* doesn't match file file3.txt
DEBUG: file file3.txt not ignored
DEBUG: ./dir2/file3.txt added to work queue
DEBUG: ./file1.txt added to work queue
DEBUG: DEBUG: Worker 4 finished.Worker 3 finished.DEBUG: DEBUG: DEBUG:
Worker 5 finished.Worker 0 finished.Worker 6 finished.
DEBUG: Match found. File ./dir2/file3.txt, offset 0 bytes.DEBUG:
Match found. File ./file1.txt, offset 0 bytes.
dir2/file3.txt
1:hello
file1.txt
1:DEBUG: Worker 1 finished.h
ello
DEBUG: Worker 2 finished.
Do you have a global .gitignore? It looks like ag is reading that and obeying it.
I do. (I'd forgotten about it.)
Adding because I encountered this same issue: adding -U to the command fixes this behaviour.
| gharchive/issue | 2015-01-05T16:34:56 | 2025-04-01T06:44:18.790216 | {
"authors": [
"conornash",
"ggreer",
"tdhopper"
],
"repo": "ggreer/the_silver_searcher",
"url": "https://github.com/ggreer/the_silver_searcher/issues/571",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1402635569 | Manifest V3
See:
https://developer.chrome.com/blog/more-mv2-transition/
Specifically:
Starting in January in Chrome 112, Chrome may run experiments to turn off support for Manifest V2 extensions in Canary, Dev, and Beta channels.
Starting in June in Chrome 115, Chrome may run experiments to turn off support for Manifest V2 extensions in all channels, including stable channel.
We also have a few updates on how the phase-out will look on the Chrome Web Store:
In January 2023, use of Manifest V3 will become a prerequisite for the Featured badge as we raise the security bar for extensions we highlight in the store.
In June 2023, the Chrome Web Store will no longer allow Manifest V2 items to be published with visibility set to Public. All existing Manifest V2 items with visibility set to Public at that time will have their visibility changed to Unlisted.
In January 2024, following the expiration of the Manifest V2 enterprise policy, the Chrome Web Store will remove all remaining Manifest V2 items from the store.
Migrate to Manifest V3:
https://developer.chrome.com/docs/extensions/develop/migrate
Extension manifest converter for migrating from v2 to v3:
https://github.com/GoogleChromeLabs/extension-manifest-converter
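For reference, the headline changes when migrating are that background pages become a single service worker, `browser_action`/`page_action` merge into `action`, and host patterns move from `permissions` into `host_permissions`. A minimal, hypothetical MV3 manifest (the name, version, and `management` permission are assumptions for illustration, not taken from this extension's actual manifest):

```json
{
  "manifest_version": 3,
  "name": "Extension Switch",
  "version": "2.0.0",
  "action": {
    "default_popup": "popup.html"
  },
  "background": {
    "service_worker": "background.js"
  },
  "permissions": ["management"],
  "host_permissions": []
}
```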
I made v2 from the ground up, with Manifest V3 support.
https://github.com/gh640/chrome-extension-extension-switch-v2
| gharchive/issue | 2022-10-10T05:58:23 | 2025-04-01T06:44:18.815462 | {
"authors": [
"gh640"
],
"repo": "gh640/chrome-extension-extension-switch",
"url": "https://github.com/gh640/chrome-extension-extension-switch/issues/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
105907938 | Matching validation issues
I created the following issues using the live demo:
Issue 1:
Enter a password into the password field.
Enter an identical password into the confirmation field.
Before submit, go back and change the password field.
Passwords validate even though they do not match anymore.
Issue 2:
Remove the 'required' validation condition of the password and confirmation inputs.
Enter a password into the password field.
Passwords validate even though the confirmation is blank.
Yes, I can see the behavior. Only changing the value of the confirmation input triggers a validation action; basically there is no watch on the parent input, which is why the input stays valid after changing your password. I'm not crazy about having Angular-Validation add multiple watches, but I don't think there is any other way to do it.
So I will see if I can add a $watch on the parent input (like the password field for example). That would probably fix both your issues when it becomes available.
Added a $watch on the parent field (for example Password) so that the matching input gets notified (example Password Confirmation).
Also note that it affects all matching validators, that is:
match: n,
different: n
Both your issues should now be fixed; if not, let me know.
Thanks for reporting it
| gharchive/issue | 2015-09-10T22:05:06 | 2025-04-01T06:44:18.849905 | {
"authors": [
"aaronbc",
"ghiscoding"
],
"repo": "ghiscoding/angular-validation",
"url": "https://github.com/ghiscoding/angular-validation/issues/68",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
550203688 | [Feature] networkmgr enterprise wlan support
For a desktop OS I really need good enterprise WLAN support, with radius/certificates etc.
Can you provide specific details and use cases?
Related to this https://issues.ghostbsd.org/issues/142
What I need is a NetworkManager with everything ;-)
Connecting WLAN
ON/OFF, WEP, WPA1/2, Enterprise (Radius/Certificate), WPS
Connecting LAN
ON/OFF, Enterprise (Radius)
Configure IP- Addresses
IPv4/IPv6, Router, additional routes, DHCP..
Configure DNS
DNS IP, Search Domains, Hostname, Hostdomain
Configure VPN
Wireguard, OpenVPN, IPsec/L2TP/IKEv2
In this case, I need the ability to connect to a company or university network without having to use "vi" or "wpa_gui".
Wifi with radius registration would be a real win here. Do you need configuration examples or an explanation for "Enterprise" WLAN?
network={ ssid="univ-wifi" proto=RSN key_mgmt=WPA-EAP pairwise=CCMP auth_alg=OPEN eap=PEAP identity="user" password="pass" }
From gmarco
add support WPA2-EAP to network manager
Hi,
please add the support for WPA2-EAP networks in network manager (wifi).
So we can manage network like this one:
network={
ssid="MyCompany"
scan_ssid=0
key_mgmt=WPA-EAP
identity="my_domain\my_name"
password="my_beautifulPWD!"
priority=5
}
Thanks.
I would probably need an ifconfig wlan scan example, at least what comes after CAPS.
I need an ifconfig wlan0 scan example to add this feature.
| gharchive/issue | 2020-01-15T14:00:19 | 2025-04-01T06:44:18.871510 | {
"authors": [
"Kernel-Error",
"ericbsd",
"vimanuelt"
],
"repo": "ghostbsd/networkmgr",
"url": "https://github.com/ghostbsd/networkmgr/issues/24",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
233208400 | "The fit failed to converge." messages do not leave useful information
It is pretty hard to trace the issue that led to the "The fit failed to converge." message.
Is it possible to report the best parameters achieved and some additional information about the explored parameter range?
I understand that it can be done, for example, with get_contours afterwards, but if an exception is raised somewhere deep, the state of the objects may not allow that.
What I did for myself is replace these exceptions with warnings. The state should also clearly record that the parameters are not the best fit.
I am not certain this is the best solution though.
What do you think?
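The approach described above (replacing the convergence exception with a warning while recording in the fit state that the parameters are not a best fit) could be sketched like this; the class and function names are illustrative, not 3ML's actual API:

```python
import warnings


class FitFailedToConverge(Exception):
    """Stand-in for the exception raised when a fit does not converge."""


class FitResult:
    def __init__(self):
        self.converged = False       # the state clearly records non-convergence
        self.best_parameters = None  # best parameters achieved so far, if any


def run_fit(minimizer, result):
    """Run a fit, downgrading a convergence failure to a warning.

    The fit result still records that the parameters are not a best fit,
    so callers (e.g. get_contours) can check it afterwards.
    """
    try:
        result.best_parameters = minimizer()
        result.converged = True
    except FitFailedToConverge as exc:
        warnings.warn(
            "The fit failed to converge: %s. Keeping the last state "
            "instead of raising." % exc,
            UserWarning,
        )
    return result
```

A caller can then inspect `result.converged` before trusting any derived quantities.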
I added this. If the fit fails, the minimizer will try to print some useful information about the last iteration and the reason for failure.
| gharchive/issue | 2017-06-02T14:55:14 | 2025-04-01T06:44:18.915251 | {
"authors": [
"giacomov",
"volodymyrss"
],
"repo": "giacomov/3ML",
"url": "https://github.com/giacomov/3ML/issues/210",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
163471454 | Add more tests for @keyframes
As discussed in https://github.com/giakki/uncss/pull/245.
I also added some instructions to /tests/selectors/index.html, advising of the things I got bitten with when adding these tests:
Placing any <div> elements on this page other than div#battleground will break the "not" test!
Any changes made here must be reflected in /tests/output/selectors/index.html; otherwise, the "Pages should resemble the reference:Selectors" test will fail!
CC: @mikelambert
Coverage increased (+0.4%) to 96.68% when pulling 624006c64fd88d42c238fe7ec29176908e340870 on RyanZim:tests into 558312b0c234a73add85b242f07de005f2f98af3 on giakki:master.
Only (+0.4%), but every bit helps!
| gharchive/pull-request | 2016-07-01T20:46:37 | 2025-04-01T06:44:18.919749 | {
"authors": [
"RyanZim",
"coveralls"
],
"repo": "giakki/uncss",
"url": "https://github.com/giakki/uncss/pull/248",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
Add delivery address for itinerant sales
In the case of a sale of goods, the delivery address of those goods must be indicated whenever: it is an itinerant sale and the point of arrival does not appear in the guía de remisión – remitente issued by the sender who carries out the transport of the goods.
example:
<cac:DeliveryTerms>
<cac:DeliveryLocation >
<cac:Address>
<cbc:StreetName>CALLE NEGOCIOS # 420</cbc:StreetName>
<cbc:CitySubdivisionName/>
<cbc:CityName>LIMA</cbc:CityName>
<cbc:CountrySubentity>LIMA</cbc:CountrySubentity>
<cbc:CountrySubentityCode>150141</cbc:CountrySubentityCode>
<cbc:District>SURQUILLO</cbc:District>
<cac:Country>
<cbc:IdentificationCode listID="ISO 3166-1" listAgencyName="United Nations Economic
Commission for Europe" listName="Country">PE</cbc:IdentificationCode>
</cac:Country>
</cac:Address>
</cac:DeliveryLocation >
</cac:DeliveryTerms>
Would this be like a factura guía?
Delivery address was added in https://github.com/giansalex/greenter-xml/commit/4f2c64c7e01651af763564d0472c5b9ba95ae0a3
| gharchive/issue | 2019-05-06T20:47:57 | 2025-04-01T06:44:18.922752 | {
"authors": [
"danielbq0802",
"giansalex"
],
"repo": "giansalex/greenter",
"url": "https://github.com/giansalex/greenter/issues/72",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
480149697 | Allow pods to run as root
Towards giantswarm/giantswarm/issues/6705
Out of curiosity - do you know why it needs root? Can we approach partners to change it?
Having looked around the images, I can't see any reason why it needs to be root.
Correct PSP isn't being applied even though CR/CRB/SA allow it:
$ kubectl auth can-i use psp/kong-kong-app-psp --as=system:serviceaccount:giantswarm:kong-app
yes
$ kubectl describe po/kong-kong-app-7d569c6cb4-gwtfl | grep psp
Annotations: kubernetes.io/psp: restricted
the service account you are using is for the controller and not for the app, check the templates there's 2 deployments and the app one seems to have some weird conditional
I still can't get this working on 8.3.0 as kubernetes is applying the restricted PSP.
The PSP should allow it:
$ kg psp/test-kong-app-psp
NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES
test-kong-app-psp false RunAsAny RunAsAny RunAsAny RunAsAny false secret
$ kg clusterrole/test-kong-app-psp -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: test-kong-app-psp
rules:
- apiGroups:
- extensions
resourceNames:
- test-kong-app-psp
resources:
- podsecuritypolicies
verbs:
- use
$ kg clusterrolebinding/test-kong-app-psp -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: test-kong-app-psp
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: test-kong-app-psp
subjects:
- kind: ServiceAccount
name: test-kong-app
namespace: kong
$ kg deploy/test-kong-app -o yaml | grep serviceAccount
serviceAccount: test-kong-app
serviceAccountName: test-kong-app
Yet still we get the following:
$ kubectl get events | grep Error
21m Warning Failed pod/test-kong-app-55464c7677-rcqzp Error: container has runAsNonRoot and image will run as root
Maybe a fresh pair of eyes from @giantswarm/team-batman might spot what I'm missing, as clearly something isn't quite right. Hopefully someone can spot my mistake!
After experimenting some more, I went ahead and deleted the restricted PSP which solved the problem - my PSP was then used. I'm at a bit of a loss as to why my PSP isn't being used as it clearly matches the pod's requirements (and restricted does not).
@glitchcrab Damn, well good to know it is the restricted PSP that is blocking it.
My PSP knowledge is low so I can't help much here :( This isn't blocking yet so it can wait a bit. Maybe @puja108 or @corest can help when they are free?
Maybe we need to recheck how PSP is applied nowadays in Kubernetes. Back when I implemented our bootstrap you needed a minimal PSP for restricting things, by now maybe not having a restricted would already put the default at the minimum? That said, having a restricted as default should not overwrite your custom PSP attached to the SA that the Deployment uses.There‘s something fishy there
Got there eventually:
[shw helm](☸ giantswarm-hk23x:kong)(fix-psp-issues)$ kg po/kong-kong-app-7cc49ddc7d-z8n64 -o yaml | grep psp
kubernetes.io/psp: kong-kong-app-psp
[shw helm](☸ giantswarm-hk23x:kong)(fix-psp-issues)$ ~/.local/bin/helm ls --tiller-namespace giantswarm | grep kong
kong 1 Thu Sep 26 17:20:55 2019 DEPLOYED kong-app-TESTY 1.2.2 kong
:tada:
I've re-requested a review as it's been a while since I touched this - just to get some eyes on it to make sure we're happy with it.
| gharchive/pull-request | 2019-08-13T13:13:05 | 2025-04-01T06:44:18.937096 | {
"authors": [
"corest",
"glitchcrab",
"puja108",
"rossf7"
],
"repo": "giantswarm/kong-app",
"url": "https://github.com/giantswarm/kong-app/pull/11",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1672627860 | Adjust Loki requests/limits
Current Loki requests & limits are an order of magnitude higher than what is actually used (GB requested vs MB used), see https://github.com/giantswarm/roadmap/issues/2182
Make sure we adjust those to reasonable numbers, using VPA might be a solution here.
Check the current Loki app installed by a customer, e.g.
goku / x6rtd
Looking at Loki values:
gateway is 100% stateless, and has an autoscaling option for HPA.
read is 100% stateless when the backend deployment is enabled, and has an autoscaling option for HPA.
write and backend are sts, so they should be adjusted by VPA. VPA is not supported upstream, but we can add an optional VPA resource in our chart.
multi-tenant-proxy is not upstream at all. It is stateless, so we could probably should add an autoscaling option like for gateway and read.
There's a great PR coming to the charts that enables HPA for all targets: https://github.com/grafana/loki/pull/8684
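The VPA suggestion above for the stateful targets could be expressed as an optional resource in the chart. A minimal sketch for the write StatefulSet, where the object name, namespace, and resource bounds are assumptions rather than values from our charts:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: loki-write   # hypothetical; would follow the release name in the chart
  namespace: loki
spec:
  targetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: loki-write
  updatePolicy:
    updateMode: "Auto"   # let VPA evict and re-create pods with new requests
  resourcePolicy:
    containerPolicies:
      - containerName: "*"
        minAllowed:
          cpu: 50m
          memory: 256Mi
        maxAllowed:
          cpu: "2"
          memory: 4Gi
```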
Resources on Gorilla/zj88t:
$ k top pods -n loki
NAME CPU(cores) MEMORY(bytes)
loki-gateway-576ffdb54-8tc6k 3m 11Mi
loki-gateway-576ffdb54-chjzj 2m 11Mi
loki-gateway-576ffdb54-hnh4n 3m 10Mi
loki-multi-tenant-proxy-55b55fb654-4hxk2 6m 18Mi
loki-multi-tenant-proxy-55b55fb654-js26b 6m 20Mi
loki-multi-tenant-proxy-55b55fb654-kgbsl 6m 20Mi
loki-read-0 4m 890Mi
loki-read-1 3m 1245Mi
loki-read-2 3m 1165Mi
loki-write-0 686m 1030Mi
loki-write-1 167m 1904Mi
loki-write-2 726m 1075Mi
Resources on Goku/x6rtd:
NAME CPU(cores) MEMORY(bytes)
loki-app-gateway-85fddcd979-mpldh 7m 11Mi
loki-app-gateway-85fddcd979-r5kth 6m 11Mi
loki-app-read-0 11m 270Mi
loki-app-read-1 11m 255Mi
loki-app-read-2 13m 293Mi
loki-app-read-3 11m 292Mi
loki-app-write-0 91m 2886Mi
loki-app-write-1 43m 1042Mi
loki-app-write-2 52m 1891Mi
loki-app-write-3 61m 2662Mi
loki-app-write-4 75m 2128Mi
| gharchive/issue | 2023-04-18T08:34:59 | 2025-04-01T06:44:18.944259 | {
"authors": [
"TheoBrigitte",
"hervenicol"
],
"repo": "giantswarm/roadmap",
"url": "https://github.com/giantswarm/roadmap/issues/2365",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1736393322 | Cluster name to be derived from App resource name, not from chart value
Some cluster apps (CAPA, CAPZ) have a values property for setting the cluster name, which is technically a repetition.
CAPVCD and CAPV use the App's name as the cluster name. Supposedly this has been decided as the preferred way in KaaS sync in August 2022.
Currently our web UI only deals with the first way.
We should also check kubectl gs template cluster for the current implementation, per provider.
Tasks
Adapt CAPA and CAPZ cluster apps to remove .metadata.name and use Release.Name instead in templates
Adapt web UI to set the cluster name via the App CR name. Including strategy for backward-compatibility.
@marians is this happa related and can be closed?
Don't know. I asked in area-kaas.
Jose confirmed this is still a thing. Now I wonder about the impact. I think this info is missing here. @gusevda can you explain this?
| gharchive/issue | 2023-06-01T13:58:30 | 2025-04-01T06:44:18.949165 | {
"authors": [
"marians",
"weatherhog"
],
"repo": "giantswarm/roadmap",
"url": "https://github.com/giantswarm/roadmap/issues/2526",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1043235875 | Preview Release
User Story
- As a cluster admin, I want to see a warning in Happa for a preview release which explains the expectations for it.
Details, Background
We plan on making CAPI preview releases for AWS and Azure available to our customer in order to get feedback from them. One small improvement, which would have a big impact on setting proper expectations for such a release, is to display a warning to our customers
Changes
a Giant Swarm release can be marked as preview and as an explanation of its purpose;
Happa displays the explanation as a warning to our customers.
preview is now a valid state on the Release CR and the CRD has been deployed to all management clusters.
Handing over to @giantswarm/team-rainbow now for the UI side. 😃
In contrast to https://github.com/giantswarm/roadmap/issues/539, I understand that preview releases as defined here should be visible to all users who have access to releases in general, correct?
Regarding upgrades:
Am I right in assuming that we do not want to motivate users to upgrade their clusters to a preview release? Currently we show a warning icon and "Upgrade available" in the web UI when a newer release is available.
Do we support upgrading to a preview release via the UIs, or shall that be suppressed?
Preview releases should be available to everyone (us & our customers).
With our current CAPI preview releases we don't want customers to upgrade to them because only cluster creation is supported at the moment. Upgrade functionality should be suppressed.
@alex-dabija can you please suggest the message for a warning we want to display?
The message should be taken from .spec.notice this way we can adapt the message based on the content and purpose of the release.
@alex-dabija As of now, the Rest API will not return wip nor preview releases. Is that fine?
Does Happa use Rest API to get the list of releases? If yes, it's not fine if the preview releases are not returned because then this story doesn't make sense.
There is little impact in not returning WIP releases because those are meant for us and we can create clusters with kubectl-gs.
How do you propose to proceed with the preview releases to be returned by the Rest API? Should we do it in Phoenix?
I had to revert my comment. In fact the Rest API will return both preview and WIP releases, but they will be marked as "inactive" like all other releases that don't have status active.
The Rest API is still in use with happa and gsctl where customers have not yet moved to SSO.
We do not display anything on Happa as of yet, but we create the cluster with kubectl-gs and write the documentation for the customers.
Notes:
Next week we will know if PoC on CAPI will be successful.
UI's on AWS for CAPI will be ReadOnly
Suggestion for message to Display:
Cluster (and MachinePool) is visible on Happa, and message saying "This CAPI Cluster and this version is not supported"
Show the Cluster in the list and block access to the detail
New Name for NodePool in CAPI Clusters is called Machine Pool
Web UI Spec
Cluster list (front page)
Initially, CAPI clusters will not be supported in the web UI. Users will not be able to navigate to a detail page for a CAPI cluster. So the list item representing such a cluster will look slightly different. And using a preview release will be visible on the element, using a yellow "PREVIEW" badge. No CPU/RAM details will be shown, and no Get started button.
Cluster details
At some point, once a cluster details page is offered for CAPI pages, the fact that the cluster uses a preview release should be indicated in the Release panel on the Overview tab.
If there is not .spec.notice set on the release CR:
With a `.spec.notice set on te release CR:
Here, the additional text shown represents the entire content of the notice attribute.
Cluster creation
If the user is allowed to select a preview release on cluster creation (initially only allowed for Giant Swarm staff/admins):
Release selection showing preview, WIP and deprecated releases marked.
CLI Spec
Current output of kubectl gs get releases:
VERSION STATUS AGE KUBERNETES CONTAINER LINUX COREDNS CALICO
v15.0.0 active 77d 1.20.6 2605.12.0 1.8.0 3.15.3
v15.0.1 active 53d 1.20.11 2605.12.0 1.8.0 3.15.3
v15.1.2 inactive 54d 1.20.11 2765.2.6 1.8.0 3.15.3
v15.1.3 active 21d 1.20.12 2905.2.5 1.8.0 3.15.3
v16.0.0 inactive 77d 1.21.4 2905.2.3 1.8.3 3.15.3
v16.0.1 inactive 55d 1.21.5 2905.2.3 1.8.3 3.15.3
v16.0.2 active 22d 1.21.6 2905.2.5 1.8.3 3.15.3
Modified STATUS column:
VERSION STATUS AGE KUBERNETES CONTAINER LINUX COREDNS CALICO
v15.0.0 ACTIVE 77d 1.20.6 2605.12.0 1.8.0 3.15.3
v15.0.1 ACTIVE 53d 1.20.11 2605.12.0 1.8.0 3.15.3
v15.1.2 DEPRECATED 54d 1.20.11 2765.2.6 1.8.0 3.15.3
v15.1.3 ACTIVE 21d 1.20.12 2905.2.5 1.8.0 3.15.3
v16.0.0 DEPRECATED 77d 1.21.4 2905.2.3 1.8.3 3.15.3
v16.0.1 DEPRECATED 55d 1.21.5 2905.2.3 1.8.3 3.15.3
v16.0.3 WIP 22d 1.21.6 2905.2.5 1.8.3 3.15.3
v20.0.0-alpha1 PREVIEW 22d 1.21.6 2905.2.5 1.8.3 3.15.3
If the user is allowed to select a preview release on cluster creation (initially only allowed for Giant Swarm staff/admins)
I think while we should show the preview release in the release list for Giant Swarms staff/admins (as we show all the available releases), we should also be careful with managing expectations there. Cluster creation for that release is not currently supported from the web UI, and creating one with it would not result in a functional cluster.
I think we should not display the radio button for the preview release for the time being, and include a note (only visible to staff) at the end of the release list, explaining why it is not possible to create a cluster with that release.
I think we should not display the radio button for the preview release for the time being, and include a note (only visible to staff) at the end of the release list, explaining why it is not possible to create a cluster with that release.
Good thought! +1
Updated Web UI Specs
Cluster List
Cluster Creation > Release Selector
Nice UI, looks great.
| gharchive/issue | 2021-11-03T08:58:31 | 2025-04-01T06:44:18.966733 | {
"authors": [
"AverageMarcus",
"PrashantR31",
"alex-dabija",
"kuosandys",
"marians",
"snizhana-dynnyk"
],
"repo": "giantswarm/roadmap",
"url": "https://github.com/giantswarm/roadmap/issues/538",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1826472078 | Can you select multiple games and activate all achievements?
Can you select multiple games and activate all achievements?
thanks in advance
I have the same question... I don't think something like that exists. I have a lot of games and I want to do all the achievements with one click, but you can't; you have to do them one by one...
| gharchive/issue | 2023-07-28T13:52:15 | 2025-04-01T06:44:18.976471 | {
"authors": [
"Sharpyku",
"gzuzz187"
],
"repo": "gibbed/SteamAchievementManager",
"url": "https://github.com/gibbed/SteamAchievementManager/issues/352",
"license": "Zlib",
"license_type": "permissive",
"license_source": "github-api"
} |
218599710 | One suggestion
A general overview of all accounts, with current balance, negatives, etc., the same as in the Ionic 1 version.
If you have a mockup or a more detailed view of what you're suggesting that would help. But I agree, I need to add a more informative screen view with all the accounts and info.
Of course. I'm going to look for some app that I have; in case I don't find one, I'll do a layout in Photoshop.
Thank you.
| gharchive/issue | 2017-03-31T19:47:49 | 2025-04-01T06:44:18.990947 | {
"authors": [
"gigocabrera",
"jowbjj"
],
"repo": "gigocabrera/MoneyLeash2",
"url": "https://github.com/gigocabrera/MoneyLeash2/issues/11",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2098727840 | Please add support for Gap property
It seems it is supported by the Yoga librarary
Nice tip, gap is super useful!
This was implemented by #17 and will be available in the next version to be released.
| gharchive/issue | 2024-01-24T17:13:08 | 2025-04-01T06:44:19.028374 | {
"authors": [
"dimozaprianov",
"gilzoide"
],
"repo": "gilzoide/unity-flex-ui",
"url": "https://github.com/gilzoide/unity-flex-ui/issues/5",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
345457264 | Update goblin dependency to 0.0.17
That would be helpful for me in order to update goblin to latest version.
We started using dependabot in the rustwasm org and it has made staying on top of the latest versions of dependencies a lot easier. It automatically makes PRs to update dependencies and you can configure it to merge if CI passes too.
We should probably set it up for gimli repos, and then you won't have to do the job manually anymore, Igor :-p
What do you think @philipc?
@fitzgen Unfortunately it can't automatically port code ;)
@fitzgen I'd be fine if you want to set that up. It lets us know when there are dependencies that need updating at least. I'd prefer if it didn't automatically merge though.
@ignatenkobrain Do you want a new object release too? (and gimli/addr2line?).
up to you guys, for now I just carry this patch along with a package :)
Reducing number of downstream patches is good. So if you have some time, do so please ;)
Added dependabot, but configured not to merge PRs by default.
| gharchive/issue | 2018-07-28T15:34:36 | 2025-04-01T06:44:19.038299 | {
"authors": [
"fitzgen",
"ignatenkobrain",
"philipc"
],
"repo": "gimli-rs/object",
"url": "https://github.com/gimli-rs/object/issues/62",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
126809997 | OSx build & run success
I've had a little bit of trouble getting licode running on my Mac.
Personally, in order to build erizo, erizoAPI & openssl, I had to make a few changes.
I'd be interested in community feedback on whether this is at all useful to anyone else.
Thanks!
| gharchive/pull-request | 2016-01-15T05:24:32 | 2025-04-01T06:44:19.048788 | {
"authors": [
"genehallman",
"lodoyun"
],
"repo": "ging/licode",
"url": "https://github.com/ging/licode/pull/398",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2894250 | Notifications are delivered in the language of the author
Because the language is set at the beginning of the request, notifications are sent in the locale of the user that performs the action, but they should be customized to the receiver's locale
This should be fixed by commit 640df85, so I'm closing this. Feel free to reopen if it keeps failing when you test it in a production environment.
Notifications and messages are now sent with the helper text in the recipient's preferred language (i.e. we do not translate the message content).
However, most accounts won't have a preferred language (since adding a language field to the user creation form was considered a liability) and will use the browser's by default. In those cases the site default rather than the sender's preference will be used. We should consider the possibility of adding a simpler language selection form, maybe at the top or bottom bar.
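The fallback rule described here (the recipient's stored preference first, then the site default) is simple enough to sketch; this is an illustration of the rule only, not Social Stream's actual code (the project itself is a Rails app):

```javascript
// Pick the locale for an outgoing notification: the recipient's stored
// preference wins; otherwise fall back to the site default, since the
// sender's browser locale is irrelevant to the receiver.
function notificationLocale(recipient, siteDefault) {
  return recipient.preferredLanguage || siteDefault;
}

console.log(notificationLocale({ preferredLanguage: "es" }, "en")); // "es"
console.log(notificationLocale({}, "en"));                          // "en"
```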
| gharchive/issue | 2012-01-19T09:18:26 | 2025-04-01T06:44:19.051025 | {
"authors": [
"atd",
"rafaelgg"
],
"repo": "ging/social_stream",
"url": "https://github.com/ging/social_stream/issues/195",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1063205493 | Dpcpp port ParILU(T)/IC(T)
This PR ports the ParILU(T)/IC(T).
TODO:
[x] Merge #924
[x] Move the ffs fix to #924
[ ] Check the residual changes on float
[x] Merge the kernel file on dpcpp because it duplicates the same kernel.inc into three file. Or put it as inc in the same folder.
I put those suggestion related to performance in TODO.
| gharchive/pull-request | 2021-11-25T06:31:23 | 2025-04-01T06:44:19.063791 | {
"authors": [
"yhmtsai"
],
"repo": "ginkgo-project/ginkgo",
"url": "https://github.com/ginkgo-project/ginkgo/pull/928",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
435201858 | quic mode has a bug~
How to trigger it:
gost -L=quic://:9823
gost -L=:1082 -F=quic://www.baidu.com:9823
Then play a fairly high-definition video and keep dragging the progress bar back and forth,
and then~
and then it just dies: nothing gets through anymore,
and it takes quite a while before transfers recover.
kcp, which also runs over UDP, doesn't have this problem.
Does the problem also occur when you test locally?
Yes, it does. This problem also showed up in kcptun at one point, but kcptun has already fixed it;
my current tests show quic still has this problem.
By default quic doesn't keep the connection alive with heartbeats; if no data is transferred for 30 seconds, the connection is dropped.
You can try enabling the heartbeat manually:
gost -L=:1082 -F=quic://:9823?keepalive=true
Please update the library,
kcptun ran into this same problem back then
https://github.com/xtaci/smux/issues/48
| gharchive/issue | 2019-04-19T14:47:54 | 2025-04-01T06:44:19.068178 | {
"authors": [
"f4nff",
"ginuerzh"
],
"repo": "ginuerzh/gost",
"url": "https://github.com/ginuerzh/gost/issues/382",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
190564340 | Can an Android command-line executable be built?
Can you build an Android command-line executable? It would be even better if you could build an apk.
Just cross-compile an arm build:
cd cmd/gost; GOOS=linux GOARCH=arm go build -o /tmp/gost; file /tmp/gost
In version 2.6 you can use a custom certificate file instead of the default randomly generated certificate, which saves the certificate-generation step.
I made a plugin for using Gost from the Shadowsocks client, feel free to try it: https://github.com/xausky/ShadowsocksGostPlugin
| gharchive/issue | 2016-11-20T15:28:30 | 2025-04-01T06:44:19.070239 | {
"authors": [
"edveen",
"ginuerzh",
"virteman",
"xausky"
],
"repo": "ginuerzh/gost",
"url": "https://github.com/ginuerzh/gost/issues/59",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
463176218 | Version mismatch NPM <-> GitHub
On GitHub we have version 0.11.1 (https://github.com/gionkunz/chartist-js/releases/tag/v0.11.1) while on NPM we have version 0.11.2 (apparently this tag doesn't even exist on NPM).
Is it intentional? I've had an issue with that as Dependabot tried to update the package to 0.11.2 and it crashed my application.
Apparently I can't even install 0.11.1 as well:
npm i --save --save-exact chartist@0.11.1
100 verbose stack Error: chartist@0.11.1 postinstall: `node ./rescue-campaign.js`
100 verbose stack Exit status 1
100 verbose stack at EventEmitter.<anonymous> (/Users/viniciuskneves/.nvm/versions/node/v10.16.0/lib/node_modules/npm/node_modules/npm-lifecycle/index.js:301:16)
100 verbose stack at EventEmitter.emit (events.js:198:13)
100 verbose stack at ChildProcess.<anonymous> (/Users/viniciuskneves/.nvm/versions/node/v10.16.0/lib/node_modules/npm/node_modules/npm-lifecycle/lib/spawn.js:55:14)
100 verbose stack at ChildProcess.emit (events.js:198:13)
100 verbose stack at maybeClose (internal/child_process.js:982:16)
100 verbose stack at Process.ChildProcess._handle.onexit (internal/child_process.js:259:5)
101 verbose pkgid chartist@0.11.1
102 verbose cwd /Users/viniciuskneves/workspace/homeday/city-explorer
103 verbose Darwin 18.6.0
104 verbose argv "/Users/viniciuskneves/.nvm/versions/node/v10.16.0/bin/node" "/Users/viniciuskneves/.nvm/versions/node/v10.16.0/bin/npm" "i" "--save" "--save-exact" "chartist@0.11.1"
105 verbose node v10.16.0
106 verbose npm v6.9.0
107 error code ELIFECYCLE
108 error errno 1
109 error chartist@0.11.1 postinstall: `node ./rescue-campaign.js`
109 error Exit status 1
110 error Failed at the chartist@0.11.1 postinstall script.
110 error This is probably not a problem with npm. There is likely additional logging output above.
111 verbose exit [ 1, true ]
When installing version 0.11.2 I get the following error:
Uncaught TypeError: Cannot read property 'window' of undefined
I'm not sure but apparently this PR has some relation to it: https://github.com/gionkunz/chartist-js/pull/776
Hi there. Thanks for reporting. I've pushed 0.11.3 which removes the rescue campaign and with it the issue with chalk. I've also pushed the version to NPM.
| gharchive/issue | 2019-07-02T11:31:57 | 2025-04-01T06:44:19.074628 | {
"authors": [
"gionkunz",
"viniciuskneves"
],
"repo": "gionkunz/chartist-js",
"url": "https://github.com/gionkunz/chartist-js/issues/1179",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1422175925 | Update the beneficiary-import pop-up, plus an improvement
### Update
Currently the beneficiary-import pop-up says:
"It is recommended not to import more than about 300 beneficiaries at a time."
Requested change: remove this sentence from the pop-up.
Improvement:
Add a progress spinner for the import during the step shown in the image below.
(It really is the same image, there's no mistake lol; the difference is that in the first one I haven't selected my file yet, while in the second one I have.)
:tada: This issue has been resolved in version 1.158.0 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
| gharchive/issue | 2022-10-25T09:44:57 | 2025-04-01T06:44:19.078784 | {
"authors": [
"Pauldoliveira",
"service-dev-gip-inclusion"
],
"repo": "gip-inclusion/carnet-de-bord",
"url": "https://github.com/gip-inclusion/carnet-de-bord/issues/1197",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1231436953 | Documentation of Architecture
Document backend
Document frontend
Review
[ ] Documentation is written
[ ] Documentation is reviewed
[ ] Build is successful
[ ] Deployment to production works
@Tugark I updated the api arch to reflect our current status.
| gharchive/issue | 2022-05-10T16:54:04 | 2025-04-01T06:44:19.081212 | {
"authors": [
"Saela",
"Tugark"
],
"repo": "gipfeli-io/documentation",
"url": "https://github.com/gipfeli-io/documentation/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2307501149 | fix: move pitest and javadoc to new directories
This commit changes the location of pitest and javadoc reports. Pitest reports are now moved to ./docs/gh-pages/pit and javadoc to ./docs/gh-pages/docs.
closes #71
:tada: This PR is included in version 1.0.2-dev.1 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
| gharchive/pull-request | 2024-05-21T07:07:48 | 2025-04-01T06:44:19.083603 | {
"authors": [
"gipo355",
"gipo999"
],
"repo": "gipo999/test-spi",
"url": "https://github.com/gipo999/test-spi/pull/72",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1235912200 | [Snyk] Security upgrade node-sass from 6.0.1 to 7.0.1
This PR was automatically created by Snyk using the credentials of a real user.Snyk has created this PR to fix one or more vulnerable packages in the `yarn` dependencies of this project.
Changes included in this PR
Changes to the following files to upgrade the vulnerable dependencies to a fixed version:
package.json
yarn.lock
Vulnerabilities that will be fixed
With an upgrade:
Severity:
Priority Score (*): 696/1000 (Why? Proof of Concept exploit, Has a fix available, CVSS 7.5)
Issue: Regular Expression Denial of Service (ReDoS) SNYK-JS-ANSIREGEX-1583908
Breaking Change: Yes
Exploit Maturity: Proof of Concept
(*) Note that the real score may have changed since the PR was raised.
Check the changes in this PR to ensure they won't cause issues with your project.
Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.
For more information:
🧐 View latest project report
🛠 Adjust project settings
📚 Read more about Snyk's upgrade and patch logic
Learn how to fix vulnerabilities with free interactive lessons:
🦉 Learn about vulnerability in an interactive lesson of Snyk Learn.
Fixed conflicts locally and confirmed changes work - merging and closing.
| gharchive/pull-request | 2022-05-14T08:15:12 | 2025-04-01T06:44:19.093045 | {
"authors": [
"giranm"
],
"repo": "giranm/pd-live-react",
"url": "https://github.com/giranm/pd-live-react/pull/119",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
701160836 | Add avatar component
Fixes #51
Create an avatar.jsx
Add image tag to it.
Use the new component in header.jsx
@salil-naik The failed tests seem to be backend related.. is this something I need to check on ? 😅
No
Thanks! Renamed it, please check 😄
| gharchive/pull-request | 2020-09-14T14:34:14 | 2025-04-01T06:44:19.117517 | {
"authors": [
"priyanshisharma",
"salil-naik"
],
"repo": "girlscript/feminist-bible-phase-2",
"url": "https://github.com/girlscript/feminist-bible-phase-2/pull/54",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1001770561 | Java 3.3: Characters and Booleans
Description 📜
Add audio, documentation, and video regarding the Characters and Booleans.
Domain of Contribution 📊
[x] Java
Location of File to be added
The files should be added inside the Characters and Booleans folder which is inside the Primitive Data Types folder present in the Java main folder. The documentation file can be in the .md/.py/.ipynb format. Audio should be in .mp3 format and to be submitted through a Google drive link. Video can be in any format and should be submitted through a Google Drive link.
Note:
You are required to comment with \assign CONTENT_TYPE, so that other content types can be created at the same time.
Failing to comment in the above manner will lead to the removal of assignment from the issue even if the bot assigns.
You are required to do a PR within 7 days otherwise you will be unassigned and others will be assigned.
Issues will be assigned on a first come first serve basis.
/assign Document
\assign document
/assign Document
You have been already assigned an issue. See #1107.
\assign document
Documentation assigned to @ISHITA-ROY016.
\assign document
/assign Audio
Audio content assigned to @tannushkaa.
\assign Document
\assign video
\assign Video
@firefistacez go with video
| gharchive/issue | 2021-09-21T03:34:09 | 2025-04-01T06:44:19.125163 | {
"authors": [
"AkMo3",
"ISHITA-ROY016",
"Palak-Srivastava",
"Yashesvinee",
"firefistacez",
"garvitraj",
"tannushkaa",
"techabhi08"
],
"repo": "girlscript/winter-of-contributing",
"url": "https://github.com/girlscript/winter-of-contributing/issues/1466",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
998424177 | OpenSource : 4.3 Explain everything related to the Season of KDE
Description :
Explain everything related to the Season of KDE
Note :
Changes should be made inside the Open_Source/ directory & Opensource branch.
Issue only for GWOC'21 contributors.
Issue will be assigned on a first come first serve basis, 1 Issue == 1 PR == 1 Contributor
For Contributor please don't create the same issue again, before creating any issue do visit OpenSource Milestone
Doc name should same as the issue title
Must Follow : Contributing Guidelines & Code of Conduct before start Contributing.
Happy Contributing 🙌
Can i work on this issue @Aryamanz29 ??
Can I contribute on this issue?
@KhafiaAyyub Already assigned one issue to you, Complete that one first then I'll assign this to you 😉
Okay, thank you!!
@Aryamanz29 Can I work on this one? :)
Can i work on this @Aryamanz29 ?
Can i work on this @Aryamanz29 ?
Sure @Harshkumar62367 ; Assigning this to you
| gharchive/issue | 2021-09-16T16:50:55 | 2025-04-01T06:44:19.131768 | {
"authors": [
"Aryamanz29",
"Harshkumar62367",
"KhafiaAyyub",
"mayankkuthar"
],
"repo": "girlscript/winter-of-contributing",
"url": "https://github.com/girlscript/winter-of-contributing/issues/290",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1014543883 | DSA 2.2.6.3 Exponential Search(Searching and Sorting)
Description
Give an introduction and explain the working of an algorithm for Exponential Search.
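(For context, a minimal sketch of the algorithm this issue asks to document: grow an upper bound by doubling until it passes the target, then binary-search the bracketed range. This is an illustrative implementation, not part of the requested contribution.)

```javascript
// Exponential search on a sorted array: grow the bound 1, 2, 4, ... until
// arr[bound] >= target (or the end of the array), then binary-search the
// range [bound/2, min(bound, length - 1)].
function exponentialSearch(arr, target) {
  if (arr.length === 0) return -1;
  if (arr[0] === target) return 0;
  let bound = 1;
  while (bound < arr.length && arr[bound] < target) bound *= 2;
  let lo = Math.floor(bound / 2);
  let hi = Math.min(bound, arr.length - 1);
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1;
}

console.log(exponentialSearch([1, 3, 5, 7, 9, 11], 9)); // 4
```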
Task
[ ] Documentation
[ ] Audio
[ ] Video
Note
While requesting the issue to be assigned, please mention your NAME & BATCH NUMBER.
Kindly refer to Issue Guidelines.
In case you see that the issue has already been assigned, try to request for a different task within the same issue.
Changes should be made inside the DSA/directory & DSA branch.
Name of the file should be same as the issue.
This issue is only for 'GWOC' contributors of 'DSA' Domain.
/assign
/assign documentation
/assign
/assign
| gharchive/issue | 2021-10-03T21:19:30 | 2025-04-01T06:44:19.136809 | {
"authors": [
"geeky01adarsh",
"nishant-giri",
"pjdurden"
],
"repo": "girlscript/winter-of-contributing",
"url": "https://github.com/girlscript/winter-of-contributing/issues/4318",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1023514336 | Explain Everything related to Django#4669
Description 📜
Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context.
Fixes #4669
Type of change 📝
[x] Documentation (Content Creation in the form of codes or tutorials)
Domain of Contribution 📊
[x] Open #Source
Checklist ✅
[x] I follow Contributing Guidelines & Code of conduct of this project.
[x] I have performed a self-review of my own code or work.
[x] I have commented my code, particularly in hard-to-understand areas.
[x] My changes generates no new warnings.
[x] I'm GWOC'21 contributor
Screenshots / Gif (Optional) 📸
@palakshivlani-11 Any updates on this PR?
| gharchive/pull-request | 2021-10-12T08:23:20 | 2025-04-01T06:44:19.141790 | {
"authors": [
"Aryamanz29",
"palakshivlani-11"
],
"repo": "girlscript/winter-of-contributing",
"url": "https://github.com/girlscript/winter-of-contributing/pull/5662",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1059080249 | Datascience with python
Description 📜
Topic: Data Science with Python : INTRODUCTION TO NUMPY
Video Link: INTRODUCTION TO NUMPY
Under the Data Science with Python domain, in the directory "DS Resources/Machine Learning", I added one more file, "Introduction To NumPy"; inside that file I put the link to my video contribution, an introduction to NumPy. Sorry for the commits, I didn't know commits are marked, and as a beginner I didn't know about this. I hope my files are correct.
Fixes #434
Type of change 📝
[ ] Audio (Should be in mp3 format Includes speech clarity, Concise ,Low distortion)
[x] Video (Animations, screen-recordings, presentations and regular explanatory films are all possibilties etc)
[ ] Documentation (Content Creation in the form of codes or tutorials)
[ ] Other (If you choose other, Please mention changes below)
Domain of Contribution 📊
[x] Datascience with #Python
Checklist ✅
[x] I follow Contributing Guidelines & Code of conduct of this project.
[x] I have performed a self-review of my own code or work.
[x] I have commented my code, particularly in hard-to-understand areas.
[x] My changes generates no new warnings.
[x] I'm GWOC'21 contributor
Screenshots / Gif (Optional) 📸
There are in total 941 files in your PR. I guess you haven't followed the workflow right. Kindly check it. @shripaddhopate
| gharchive/pull-request | 2021-11-20T07:02:15 | 2025-04-01T06:44:19.148423 | {
"authors": [
"prathimacode-hub",
"shripaddhopate"
],
"repo": "girlscript/winter-of-contributing",
"url": "https://github.com/girlscript/winter-of-contributing/pull/8103",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2552416925 | Fix: url with filters in gui client
Fix the url when the filters are selected, to one that exists
Changes
Change the url to a correct one for when the filters are applied in https://git-scm.com/downloads/guis
Context
Fix #1886
@SegoCode what do you think, should we also add a commit to redirect gracefully from the now-404ing URLs? Something like this?
diff --git a/content/downloads/guis/_index.html b/content/downloads/guis/_index.html
index 1429c7e91..819c2a463 100644
--- a/content/downloads/guis/_index.html
+++ b/content/downloads/guis/_index.html
@@ -5,6 +5,8 @@ title: "Git - GUI Clients"
url: /downloads/guis.html
aliases:
- /downloads/guis/index.html
+- /download/guis/index.html
+- /download/guis.html
---
<div id="main">
Something like this?
Hmm. That does not quite work, because the ?os=linux part is lost in that redirection. Let me dig deeper.
@SegoCode I think I've found it. I have to patch layouts/alias.html where it tries to preserve window.location.hash (i.e. any anchor) so that it also tries to preserve window.location.search. Will push my proposed solution in a moment.
I have to patch layouts/alias.html where it tries to preserve window.location.hash (i.e. any anchor) so that it also tries to preserve window.location.search.
Here is that fix: https://github.com/git/git-scm.com/pull/1887/commits/40fc16d2a8c56394731c7f217e73f4ae08cbe30d
And after that, this idea is implemented in https://github.com/git/git-scm.com/pull/1887/commits/f00a03af6a37afb361bd0f0b3ff2f1bd7b60fe26, which I verified locally to work.
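The fix being described can be illustrated with a small sketch of the redirect logic (an illustration of the approach only, not the actual code in layouts/alias.html):

```javascript
// Build a redirect target that preserves both the query string (e.g. ?os=linux)
// and the anchor, which is what the alias fix needed to do.
// `location` is passed in as a plain object so the logic runs outside a browser.
function redirectTarget(aliasTarget, location) {
  return aliasTarget + (location.search || "") + (location.hash || "");
}

console.log(redirectTarget("/downloads/guis", { search: "?os=linux", hash: "" }));
// "/downloads/guis?os=linux"
```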
@SegoCode what do you think? Are these two commits okay with you?
looks good! and if you have tested it in your local and it works as expected, it's OK for me
@SegoCode Thank you for noticing, reporting and fixing the bug!
Please note that due to git-scm.com using Cloudflare's caches, the fix may be delayed for up to 4 hours (and more, if your browser cache holds onto /js/application.min.js for some time).
| gharchive/pull-request | 2024-09-27T09:10:04 | 2025-04-01T06:44:19.207967 | {
"authors": [
"SegoCode",
"dscho"
],
"repo": "git/git-scm.com",
"url": "https://github.com/git/git-scm.com/pull/1887",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
333409532 | How to add a margin to the bottom of a table before it moves on to the next page?
I want the table to break above the footer.
this.element.createTableHygiene = function (slide, headerRows, bodyRows, y_axis) {
var concatObj = [headerRows].concat(bodyRows);
var test = slide.addTable(concatObj, {
x: 0.5,
y: y_axis,
w: 7.5,
color: '000000',
fontSize: 11,
border: {
pt: '1',
color: '000000'
},
margin: 8,
autoPage: true,
newPageStartY: 2.2,
colW: [3.25, 1.25, 3],
valign: 'middle',
align: 'center',
fill: '004684',
color: 'ffffff',
bold: true
});
};
this.element.createTableHygiene(slide, obj3, obj4, 5.75);
I figured it out.
A negative lineBreak attribute solves this problem.
"A negative lineBreak attribute solves this problem"
can you please help me with this ..facing the same problem.. I want to define a bottom margin for slide.
@jsvishal - please see /demo - there are examples of line height adjustments, etc.
| gharchive/issue | 2018-06-18T19:46:09 | 2025-04-01T06:44:19.222349 | {
"authors": [
"carlpage",
"gitbrent",
"jsvishal"
],
"repo": "gitbrent/PptxGenJS",
"url": "https://github.com/gitbrent/PptxGenJS/issues/356",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1571010474 | Adding style as HTML table is parsed
Hi there!
I've added style parsing to the sheet_add_dom function. I'd be glad to contribute with a PR if you wish.
Only a few slight changes to sheet_add_dom allow it to recognize fill and text color, font weight, indentation and text alignment.
Let me know if you're interested.
Do you mean that the following code:
<th>This text is bold and centred by default</th>
or
<td style="color: red; font-weight: bold;">This text is red and bold by style</td>
will appear in color and will be bold in the worksheet?
Yes, kind of: fore and fill color, text alignment, font weight (bold / italic) ...
I've been using it a lot, as our application allow export of many html tables to excel files.
@CarlVerret
I already did it myself.
Do you know a way to format the cells? Like keep leading zeros of numbers. E.g. format them as text.
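For the leading-zeros follow-up, here is a minimal sketch using the SheetJS-style cell-object convention (cell fields `v`/`t`/`z` as commonly documented for xlsx-family libraries; treat exact field support in xlsx-js-style as an assumption to verify):

```javascript
// Two common ways to keep leading zeros in a SheetJS-style cell object.
// Assumes the usual cell shape { v: value, t: type, z: numberFormat }.
function textCell(value) {
  // Store the value as a string cell: "007" stays "007".
  return { v: String(value), t: "s" };
}

function textFormattedCell(value) {
  // Keep the value numeric but apply the "@" (Text) number format.
  return { v: value, t: "n", z: "@" };
}

console.log(textCell("007"));      // { v: "007", t: "s" }
console.log(textFormattedCell(7)); // { v: 7, t: "n", z: "@" }
```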
| gharchive/issue | 2023-02-04T16:12:11 | 2025-04-01T06:44:19.225935 | {
"authors": [
"CarlVerret",
"webdbase"
],
"repo": "gitbrent/xlsx-js-style",
"url": "https://github.com/gitbrent/xlsx-js-style/issues/19",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1747103048 | 🛑 invidious.kavin.rocks is down
In 33e3f85, invidious.kavin.rocks (https://invidious.kavin.rocks/) was down:
HTTP code: 502
Response time: 15318 ms
Resolved: invidious.kavin.rocks is back up in 551c4e2.
| gharchive/issue | 2023-06-08T05:19:24 | 2025-04-01T06:44:19.249379 | {
"authors": [
"gitetsu"
],
"repo": "gitetsu/invidious-instances-upptime",
"url": "https://github.com/gitetsu/invidious-instances-upptime/issues/1550",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1797926709 | 🛑 invidious.privacydev.net is down
In 9a54e97, invidious.privacydev.net (https://invidious.privacydev.net/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: invidious.privacydev.net is back up in a3cd964.
| gharchive/issue | 2023-07-11T01:52:08 | 2025-04-01T06:44:19.252933 | {
"authors": [
"gitetsu"
],
"repo": "gitetsu/invidious-instances-upptime",
"url": "https://github.com/gitetsu/invidious-instances-upptime/issues/1861",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1614906298 | 🛑 invidious.flokinet.to is down
In b671e80, invidious.flokinet.to (https://invidious.flokinet.to/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: invidious.flokinet.to is back up in 7c34a9f.
| gharchive/issue | 2023-03-08T09:08:44 | 2025-04-01T06:44:19.256213 | {
"authors": [
"gitetsu"
],
"repo": "gitetsu/invidious-instances-upptime",
"url": "https://github.com/gitetsu/invidious-instances-upptime/issues/530",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2035810712 | 🛑 search.ahwx.org is down
In 1241fc6, search.ahwx.org (https://search.ahwx.org/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: search.ahwx.org is back up in 3d22630 after 9 minutes.
| gharchive/issue | 2023-12-11T14:29:50 | 2025-04-01T06:44:19.259787 | {
"authors": [
"gitetsu"
],
"repo": "gitetsu/librex-instances-upptime",
"url": "https://github.com/gitetsu/librex-instances-upptime/issues/1881",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1590415457 | 🛑 librex.terryiscool160.xyz is down
In 025e85c, librex.terryiscool160.xyz (https://librex.terryiscool160.xyz/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: librex.terryiscool160.xyz is back up in 48f9da6.
| gharchive/issue | 2023-02-18T17:45:02 | 2025-04-01T06:44:19.262548 | {
"authors": [
"gitetsu"
],
"repo": "gitetsu/librex-instances-upptime",
"url": "https://github.com/gitetsu/librex-instances-upptime/issues/57",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1263574538 | Feat: Make approval counts unique by reviewer
🎯 Goal
Do not increment infinite counters : each reviewer have a unique state for a specific PR :
APPROVED, CHANGES_REQUESTED, DISMISSED or PENDING.
To note: COMMENTED state is handled differently, because we can add comments without affecting approval states above
🚀 What to expect
The sum of all APPROVED, CHANGES_REQUESTED, DISMISSED or PENDING states is equal to the number of reviewers. And of course it's the last submitted state for each reviewer.
COMMENTED state can be incremented infinitely.
Example with 1 reviewer, who has left 2 simple commented reviews :
Hello @julienherrero
Sorry for my late answer.
Thank you very much for your contribution.
Rgs
| gharchive/pull-request | 2022-06-07T16:36:19 | 2025-04-01T06:44:19.290803 | {
"authors": [
"github-actions-tools",
"julienherrero"
],
"repo": "github-actions-tools/gh-reviews-count",
"url": "https://github.com/github-actions-tools/gh-reviews-count/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
752467955 | Explain how workflow run URL is constructed
Why:
There are two different GitHub Actions workflow run URLs:
The workflow overview URL, e.g. https://github.com/sindresorhus/refined-github/actions/runs/387494159
A specific job URL, e.g. https://github.com/sindresorhus/refined-github/runs/1464813172
The docs don't make it clear which of these URL parameters the GITHUB_RUN_ID environment variable is. This PR tries to clarify this.
What's being changed:
I've added one sentence after the environment variables table:
Check off the following:
[x] All of the tests are passing.
[X] I have reviewed my changes in staging. (look for the deploy-to-heroku link in your pull request, then click View deployment)
[X] For content changes, I have reviewed the localization checklist
[X] For content changes, I have reviewed the Content style guide for GitHub Docs.
@ha1dyansyah I don't see how that PR is related to this one. Can you please clarify?
@FloEdelmann Thanks so much for opening a PR! I'll get this triaged for review 🌟
@janiceilene Any news?
@FloEdelmann Thanks for your patience! Our small team is working our way through reviewing all of the amazing contributions ✨
@janiceilene Sorry to bother you again, I just want to keep this PR from being closed.
Hi @FloEdelmann 👋
Thanks for this contribution! It looks like there are a couple of things you'd like to address in this PR:
Is your main concern that the URL paths seem unclear about which is the workflow ID, and which is the job ID? For example:
https://github.com/sindresorhus/refined-github/actions/runs/387494159
https://github.com/sindresorhus/refined-github/runs/1464813172
For the workflow run URL, are you needing to retrieve this from within a job running on a runner? Otherwise, you could retrieve this using the API: https://docs.github.com/en/free-pro-team@latest/rest/reference/actions#get-a-workflow-run (html_url)
Hi @martin389, it's actually both:
I needed to retrieve the workflow run URL from within a job:
This workflow (which is triggered by a daily cron job) runs this script to regularly update a comment in this issue, in which I wanted to link to the current specific workflow run.
https://github.com/OpenLightingProject/open-fixture-library/blob/5cd0b9d9ffefd2aa3b0e648664ad807269155a90/tests/external-links.js#L215
As the docs were not clear on how to construct the workflow run URL, I found out by trial and error and thus opened this PR to save others from making the same mistake :)
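To illustrate the distinction, the workflow-run URL can be built from the documented default environment variables like this (a sketch; the repository and run id values are just examples):

```javascript
// GITHUB_RUN_ID belongs in the /actions/runs/<id> overview URL,
// not in the /runs/<id> URL, which identifies a single job.
function workflowRunUrl(serverUrl, repository, runId) {
  return `${serverUrl}/${repository}/actions/runs/${runId}`;
}

console.log(workflowRunUrl("https://github.com", "sindresorhus/refined-github", "387494159"));
// "https://github.com/sindresorhus/refined-github/actions/runs/387494159"
```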
@martin389 @janiceilene Again, I just want to keep this PR from being closed. Please take all the time you need. Happy holidays! :fireworks:
Thank you @FloEdelmann! 🎉 I'm checking this approach with our support folks 👍
Friendly bump again :)
Following up on this 👍
The support folks are happy with this approach 👍 😄 Preparing to ship this update. 🚢
A preview of this update is available in staging: https://docs-1651--patch-1.herokuapp.com/en/free-pro-team@latest/actions/reference/environment-variables#default-environment-variables
(Fixed merge conflict)
👍 The new note will be available here, once published: https://docs.github.com/en/free-pro-team@latest/actions/reference/environment-variables#default-environment-variables
Great, thanks 😊
| gharchive/pull-request | 2020-11-27T20:52:56 | 2025-04-01T06:44:19.361368 | {
"authors": [
"FloEdelmann",
"janiceilene",
"martin389"
],
"repo": "github/docs",
"url": "https://github.com/github/docs/pull/1651",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
785532240 | Update using-ssh-over-the-https-port.md
The user for connecting must be "git"
Why:
Improve documentation
What's being changed:
doc
Check off the following:
[ ] I have reviewed my changes in staging. (look for the deploy-to-heroku link in your pull request, then click View deployment)
[ ] For content changes, I have reviewed the localization checklist
[ X ] For content changes, I have reviewed the Content style guide for GitHub Docs.
@bon77 Thanks so much for opening a PR! I'll get this triaged for review ⚡
It looks like our new action inadvertently closed this 🙃 Sorry about that! Reopening and triaging this for review 💛
| gharchive/pull-request | 2021-01-13T23:30:50 | 2025-04-01T06:44:19.366034 | {
"authors": [
"bon77",
"janiceilene"
],
"repo": "github/docs",
"url": "https://github.com/github/docs/pull/2867",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
414954476 | https://github.com/github/explore/issues/588#issuecomment-466796720
https://github.com/github/explore/issues/588#issuecomment-466796720
Thank you
Good
https://github.com/github/explore/issues/602#issuecomment-473997427
| gharchive/issue | 2019-02-27T05:42:27 | 2025-04-01T06:44:19.368733 | {
"authors": [
"Ekkarat304",
"Rixi2203",
"cmayes813",
"suker333"
],
"repo": "github/explore",
"url": "https://github.com/github/explore/issues/602",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1716150440 | Re-implement gh-migration-analyzer as gh gei inventory-report
Our great gh-migration-analyzer open source project is a CLI tool which helps you to plan your GitHub migration by creating a CSV "inventory" of repos in a GitHub organization.
But having this tool inside the gh gei CLI would make it much more easily accessible to customers, without having to download another tool or set up Node.js.
It'll also be more maintainable for us - we don't have strong JavaScript expertise within the Migration Tools team!
Let's bring its functionality in gh gei! 🥳
We should add a gh gei inventory-report command, like the existing command in gh ado2gh, which takes the organization and GitHub API URL as arguments and generates a CSV with a format like the following:
organization,repository_name,last_pushed_at,is_archived,pull_requests_count,issues_count,releases_count,disk_usage_in_bytes
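To make the proposed format concrete, the header can be sanity-checked with a quick parse; the data row below is invented purely for illustration:

```python
import csv
import io

# Hypothetical sample matching the proposed header; the data row is invented
# for illustration only.
sample = (
    "organization,repository_name,last_pushed_at,is_archived,"
    "pull_requests_count,issues_count,releases_count,disk_usage_in_bytes\n"
    "acme,widgets,2023-05-01T00:00:00Z,false,12,34,5,204800\n"
)

rows = list(csv.DictReader(io.StringIO(sample)))
print(rows[0]["repository_name"])      # -> widgets
print(rows[0]["disk_usage_in_bytes"])  # -> 204800 (a string; cast as needed)
```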
this is probably a duplicate of #493
@dylan-smith Good spot! I did see it, but was thrown off by the repo-list bit. I might close that one and split it into two distinct issues.
Closing in favour of https://github.com/github/gh-gei/issues/493
| gharchive/issue | 2023-05-18T19:25:45 | 2025-04-01T06:44:19.372693 | {
"authors": [
"dylan-smith",
"timrogers"
],
"repo": "github/gh-gei",
"url": "https://github.com/github/gh-gei/issues/993",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
55718677 | Added venv folder to .gitignore
Added venv files to Python.gitignore
Thanks, but see Global/VirtualEnv.gitignore for this.
| gharchive/pull-request | 2015-01-28T06:15:36 | 2025-04-01T06:44:19.374090 | {
"authors": [
"arcresu",
"rukmal"
],
"repo": "github/gitignore",
"url": "https://github.com/github/gitignore/pull/1384",
"license": "cc0-1.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
81590314 | Added Maven .gitignore model
Just to contribute and, based on Java.gitignore, also ignore Maven target folder.
I'm not sure I understand - should we be updating this template?
My mistake! I couldn't find this template before so I created one. Please ignore my pull request.
| gharchive/pull-request | 2015-05-27T19:30:27 | 2025-04-01T06:44:19.375800 | {
"authors": [
"lossurdo",
"shiftkey"
],
"repo": "github/gitignore",
"url": "https://github.com/github/gitignore/pull/1533",
"license": "cc0-1.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
Ask before creating a new fork if one already exists
This way scripts for example wouldn't be given the chance to wreck your PR, because you can just say no to overwriting your existing branch.
Hi I don't understand what you're asking here. Could you elaborate more and what workflow do you intend to solve with a proposed change? Maybe with examples?
Sorry for the late reply.
When running hub fork when you have an existing forked branch on GitHub, hub just overwrites the GitHub branch, therefore removing any pushed commits.
What does it mean that hub just overwrites the github branch? Which branch gets overwritten? hub fork doesn't push any commits to a remote and doesn't deal with branches at all. Plus, when your fork already exists, hub fork does nothing but ensure that the git remote is set up correctly.
@TheDoctorsLife I didn't understand your initial problem; do you perhaps have more information for me to treat this as a bug? I'd really like to fix things if hub fork can lead to loss of information. But right now, I can't see how that can happen, since an existing fork on GitHub never gets overwritten by a new one.
What happened to me is that if I used hub fork, it would appear to delete the existing fork and replace it with the local copy.
it would appear to delete the existing and replace it with the local copy.
Hub can never delete an existing repository on GitHub.com, not even a fork. It doesn't have any functionality that is able to delete or replace a repository. It can only create new ones.
Can you reproduce your problem? Would you share exact steps you've taken so I might try for myself?
I tried it out again, (was in a script) and the bug/issue I was experiencing appears to be gone. Closing issue.
@TheDoctorsLife Thanks for the update! 👍
| gharchive/issue | 2016-04-06T18:07:41 | 2025-04-01T06:44:19.380720 | {
"authors": [
"TheDoctorsLife",
"mislav"
],
"repo": "github/hub",
"url": "https://github.com/github/hub/issues/1158",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
39587226 | Added pnd.coffee
An easy way to get monster information for Puzzle & Dragons, the mobile game.
We are actually moving away from adding scripts to this repository in favor of separate npm packages per script. See #1113 for details, and let us know if you have any questions about getting going with that!
| gharchive/pull-request | 2014-08-06T03:43:45 | 2025-04-01T06:44:19.381984 | {
"authors": [
"Arthraim",
"technicalpickles"
],
"repo": "github/hubot-scripts",
"url": "https://github.com/github/hubot-scripts/pull/1542",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
59464635 | *.pro extension is also used for KiCad project files
Now *.pro is recognised as Prolog source code.
Repos in question:
https://github.com/Miceuz/ultimate-temperature-controller
https://github.com/Miceuz/soil-moisture-sensor-analog
Qt Creator also uses *.pro as the extension for its project files.
Qt Creator files are easily identified by the first 5 lines of the file (especially line 3):
#-------------------------------------------------
#
# Project created by QtCreator 2012-06-06T19:03:23
#
#-------------------------------------------------
https://github.com/search?q=%23+Project+created+by+QtCreator&type=Code&utf8=✓
@Miceuz #2154 fixes this. It was recently merged and should be in the next release.
| gharchive/issue | 2015-03-02T10:42:00 | 2025-04-01T06:44:19.385417 | {
"authors": [
"Miceuz",
"pchaigno",
"thorade"
],
"repo": "github/linguist",
"url": "https://github.com/github/linguist/issues/2186",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
184660662 | Single quote inside heredoc causes remainder of file to be rendered as string
Currently this is still considered a string opening quote, even though it doesn't act as one
cat << EOF
test ' test
EOF
echo foo bar
# Note that these are still highlighted as if they were a string
I'll take a look at fixing this when/if I have time, but otherwise I think this is worth fixing anyway.
Looks like this is a bug in https://github.com/atom/language-shellscript, judging by the grammars file.
@cdown You can close this, because this is an upstream issue. When/if it gets fixed, Linguist will pick the latest changes up automatically at the next release.
Yeah, sure.
| gharchive/issue | 2016-10-22T23:19:13 | 2025-04-01T06:44:19.388312 | {
"authors": [
"Alhadis",
"cdown"
],
"repo": "github/linguist",
"url": "https://github.com/github/linguist/issues/3290",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
56481908 | Add cpplint.py to vendor.yml
cpplint.py is Google's Python script used for linting C++ files.
I have a small C++ project with cpplint.py included, mistakenly making Python the main language of my project.
:+1: makes sense. Thanks @philix.
It'll be another week or so before this change is live on GitHub.
| gharchive/pull-request | 2015-02-04T05:22:28 | 2025-04-01T06:44:19.390153 | {
"authors": [
"arfon",
"philix"
],
"repo": "github/linguist",
"url": "https://github.com/github/linguist/pull/2075",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
215026949 | Add C header for more accurate classification from https://github.com/Wetitpig/osmctools
The repository with misclassified files: https://github.com/Wetitpig/osmctools
All the files in the repository were created by breaking a large C file into several small C files. However, after breaking it up, Linguist shows C++ as the language for three of those files, which doesn't make sense at all.
Can anyone suggest how to improve Linguist so it detects C and C++ more accurately?
But does it take up so much space that you can't afford it? Otherwise, why can't you put it into the header samples?
By the way, how long does it take to refresh the language statistics bar?
But does it take up so much space that you can't afford it? Otherwise, why can't you put it into the header samples?
Yes, space is a concern (see #2117 for instance). I'm also worried that, in part because we don't have many samples, adding a sample file with mostly data will screw up the Bayesian classifier.
By the way, how long does it take to refresh the language statistics bar?
It should happen in the next hour after each push to the repository. If not, you can contact GitHub support and they'll look into it.
| gharchive/pull-request | 2017-03-17T14:54:14 | 2025-04-01T06:44:19.393551 | {
"authors": [
"Wetitpig",
"pchaigno"
],
"repo": "github/linguist",
"url": "https://github.com/github/linguist/pull/3521",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
226317122 | DnsimpleProvider some objects with not attribute
When I try to Import Data via DnsimpleProvider the following AttributeError are given:
AttributeError: 'DnsimpleProvider' object has no attribute '_data_for_ALIAS'
AttributeError: 'DnsimpleProvider' object has no attribute '_data_for_URL'
The URL record is something specific, and the ALIAS record is too, if I understand this document correctly.
In some way it should be possible to get the data while being given an error that some records were not dumped, rather than having the complete dump fail.
This is more a feature request to make migration easier.
AttributeError: 'DnsimpleProvider' object has no attribute '_data_for_ALIAS'
ALIAS records aren't currently supported, but hopefully will be to the degree they can https://github.com/github/octodns/issues/26.
AttributeError: 'DnsimpleProvider' object has no attribute '_data_for_URL'
URL (redirect) records aren't a supported type either.
Both are non-standard record types that vary from provider to provider (if they're available at all). That makes them difficult to support. I'd like to get ALIAS support in as they're common enough, but the URL type might be a bit tougher. We could switch things around a bit to make unsupported types ignored by default. I have mixed feelings about that as it's a recipe for surprises when someone goes to import their zone as you're doing and some of the records are just ignored. The way it is now is a better-safe-than-sorry choice. Thoughts welcome.
@ross thank you for the response (and for the tool).
My general idea was to not break and throw an error when those unsupported attributes pop up, but to write the file with all the other data and leave the unsupported data out.
The important thing is that the result needs to be marked as "not ready to use", but you would be able to resolve that without looking at the GUI/browser because you know which records cause trouble and need to be looked at.
Currently I need to look over some domains "by hand" to identify the records and solve the issue. After that go back to the import.
Just my 2 cent on this.
My general idea was to not break and throw an error when those unsupported attributes pop up, but to write the file with all the other data and leave the unsupported data out.
It'd probably make sense to have some sort of --ignore-unsupported flag to do that. The way things are currently plumbed would need to change a bit to accomplish that, and there'd need to be a few changes to the providers themselves, but it should be doable. It'd be somewhat related to the TODO task #3.
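A minimal sketch of what an --ignore-unsupported mode could do (names and record shape are invented for illustration; this is not octodns's real API): keep the records the target supports and warn about the rest, instead of aborting the whole dump.

```python
import logging

# Hypothetical supported-type set; a real provider would define its own.
SUPPORTED_TYPES = {"A", "AAAA", "CNAME", "MX", "NS", "TXT"}

def filter_supported(records, supported=SUPPORTED_TYPES):
    # Keep supported records, warn (don't fail) on the rest.
    kept = []
    for record in records:
        if record["type"] in supported:
            kept.append(record)
        else:
            logging.warning(
                "ignoring unsupported record type %s for %s",
                record["type"], record["name"],
            )
    return kept

zone = [
    {"name": "www", "type": "A"},
    {"name": "docs", "type": "ALIAS"},
    {"name": "short", "type": "URL"},
]
kept = filter_supported(zone)
print([r["name"] for r in kept])  # -> ['www']
```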
Something like --ignore-unsupported would be awesome and should be implemented to take away the pain if one company decides to implement something new...
ALIAS support is coming in https://github.com/github/octodns/issues/47. URL behavior will be tougher to add since it varies so much by provider. We've actually ended up implementing our own redirect service b/c of it :slightly_frowning_face:
Forgot the other half of the reason we're doing our own URL-style records: we wanted HTTPS, and few (maybe none) of the providers support that at the moment.
thank you @ross for the update on this.
| gharchive/issue | 2017-05-04T15:20:57 | 2025-04-01T06:44:19.407169 | {
"authors": [
"jalogisch",
"ross"
],
"repo": "github/octodns",
"url": "https://github.com/github/octodns/issues/30",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
727164752 | [translation] translated checklist in sections for PL
[x] Have you followed the contributing guidelines?
[x] Have you explained what your changes do, and why they add value to the Guides?
Please note: we will close your PR without comment if you do not check the boxes above and provide ALL requested information.
I translated the starting-a-project checklist into Polish
Thanks again @olsza!
| gharchive/pull-request | 2020-10-22T08:24:06 | 2025-04-01T06:44:19.409929 | {
"authors": [
"MikeMcQuaid",
"olsza"
],
"repo": "github/opensource.guide",
"url": "https://github.com/github/opensource.guide/pull/1916",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1946310188 | Bradfordpatch1assets/js
https://github.com/cli/cli/discussions/8200#discussion-5744066
https://codespaces.new/github/opensource.guide/pull/3058?quickstart=1
https://github.com/codespaces/badge.svg
bradford80USA:patch-1
git fetch [<options>] [<repository> [<refspec>…]]
git fetch [<options>] <group>
git fetch --multiple [<options>] [(<repository> | <group>)…]
git fetch --all [<options>]
DESCRIPTION
https://github.com/matter-labs/zksync-web-era-docs/pull/750/files#conversations-menu
[x] Have you followed the contributing guidelines?
[x] Have you explained what your changes do, and why they add value to the Guides?
Please note: we will close your PR without comment if you do not check the boxes above and provide ALL requested information.
Locked and reported
| gharchive/pull-request | 2023-10-17T00:27:14 | 2025-04-01T06:44:19.414463 | {
"authors": [
"ahpook",
"bradford80USA"
],
"repo": "github/opensource.guide",
"url": "https://github.com/github/opensource.guide/pull/3148",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
359599694 | Fail instead of Error if unable to setup DB
Closes https://github.com/github/orchestrator/issues/609
Seems like CI consistently fails at the same spot. Possibly due to this very change.
I've added a cat $test_logfile that's found elsewhere in tests/integration/test.sh. Let's see what CI tells us.
Hmmm, well, I think we need the change I added, but the fail is not coming from the integration tests, it's the units: 2018-09-13 17:33:58 FATAL Error 1045: Access denied for user ''@'localhost' (using password: NO) 🤔
I've found at least one unit test that is trying to connect to the database:
https://github.com/github/orchestrator/blob/f7533848fcd6cae6506ed8fc2d79fa7c751389c3/go/inst/cluster_test.go#L59
https://github.com/github/orchestrator/blob/622e979722bf433232ae267727933ba055cdbfed/go/inst/cluster.go#L58
https://github.com/github/orchestrator/blob/41d02cdd021875582ca5c7867f14d895686ea58d/go/inst/resolve_dao.go#L371
https://github.com/github/orchestrator/blob/c2cfa3e2d3cd5853b2a5ce01ce49a591190f905c/go/db/db.go#L366
Which hits this change: https://github.com/github/orchestrator/blob/c2cfa3e2d3cd5853b2a5ce01ce49a591190f905c/go/db/db.go#L148
Not sure what the right course of action with this is...
Upon commenting out the offending call here:
https://github.com/github/orchestrator/blob/622e979722bf433232ae267727933ba055cdbfed/go/inst/cluster.go#L58
I was able to get the remainder of the unit tests to pass successfully: this is the only place where the database must be used for the unit tests.
| gharchive/pull-request | 2018-09-12T18:27:38 | 2025-04-01T06:44:19.420608 | {
"authors": [
"bbuchalter",
"shlomi-noach"
],
"repo": "github/orchestrator",
"url": "https://github.com/github/orchestrator/pull/618",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
999972915 | Generator --parent option: add more tests and fix edge case
Summary
(Continued from #1073)
I found an edge case: when you both have ApplicationComponent defined in your project and configure a component_parent_class, ApplicationComponent was always used.
I added more tests, fixed the issue and refactored ComponentGenerator#parent_class.
@joelhawksley I'm not sure it makes sense to add a CHANGELOG entry in that case
| gharchive/pull-request | 2021-09-18T08:37:14 | 2025-04-01T06:44:19.439312 | {
"authors": [
"Spone"
],
"repo": "github/view_component",
"url": "https://github.com/github/view_component/pull/1074",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1741232471 | Doesn't sync with the new action
Describe the bug
Creating a new workflow file in a new branch doesn't show the action in the Workflows section.
I created a new branch from the main branch, added a new workflow file test.yml, and tried two scenarios to run the action in VS Code:
I committed and pushed the file, but the action doesn't show up in the workflow list.
I merged it into the main branch, and after that, clicking refresh shows it in the list.
To Reproduce
Steps to reproduce the behavior:
create a new branch
add new workflow test.yml
add, commit and push the code
and check the Workflow section, newly added workflow is not listed
Expected behavior
It should show the newly created workflow in the list.
Screenshots
If applicable, add screenshots to help explain your problem.
Extension Version
v0.25.7
Additional context
I am running in the Mac
The extension uses the Actions workflows API to populate the list of workflows in the sidebar. It is therefore limited by, and consistent with, workflow creation in all of GitHub Actions. It mirrors the behavior of the GitHub web UI.
For a workflow to be created (and show up in the sidebar) it needs to either: a) be checked into the default branch or b) have a run associated with it. If your workflow has a workflow dispatch trigger, a workaround would be to introduce another trigger like push to trigger one run.
| gharchive/issue | 2023-06-05T08:15:18 | 2025-04-01T06:44:19.445084 | {
"authors": [
"elbrenn",
"mr250792"
],
"repo": "github/vscode-github-actions",
"url": "https://github.com/github/vscode-github-actions/issues/209",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
269962855 | In-step highlighting of multiple code fragments
I am looking the ability to highlight code fragments simultaneously in two separate code blocks on the same slide.
My use cases include:
Compare and contrast equivalent code in two different languages
Relate source code to the output it generates
(In the latter case one of the blocks isn't really source code.)
Do you think that this might ever be supported?
Hi @jacg thanks for yet another interesting feature request. It so happens I am in the early stages of working on major new GitPitch release. That release will introduce a number of new features, including some subscription-only features, one of which I've provisionally named synchronized code-presenting.
I believe this new sync feature will give you the kind of behavior you have described. For example, what I envisage for synchronize code-presenting would support the following types of scenarios:
1. Input -> Output
Stepping through a shell script and simultaneously seeing the corresponding console output being automatically updated.
2. Code -> State
Stepping through code and simultaneously seeing state changes or corresponding console output.
3. Code - > Code
Stepping through simultaneous code blocks, loops, functions for side-by-side comparison, discussion.
As you can see these are all variations on a theme, what I call (for want of a better name) synchronized code-presenting. And I think this will handle your uses cases as long as the outputs you referred to are text-based outputs.
I do not yet have a date for this upcoming release but I will update this thread and the GitPitch blog with further news as it becomes available. Just fyi, this same release will also be adding support for a number of subscription-only features built around GitPitch presentations being served from within private repos which may also be of interest? Cheers, David.
Hi David, (I'm assuming that you get these without being @mentioned explicitly)
The features you mention look very interesting, and indeed closely related to what I wanted.
Presenting from private repos is not of particular interest for me at present. Indeed, my current needs are mostly towards the other end of the openness spectrum, so much so that I'm just slightly worried by hearing about subscription-only features.
I do not mean this to be critical in the slightest, and respect your choice of business model; I'm merely mentioning how it relates to my current needs, which may evolve in the future.
Could you point me to some info on how your subscription model works?
Hi @jacg thanks for this great feedback. In truth one of the reasons I mentioned some of the upcoming subscription-only features was to start having this type of conversation with my users.
I think the single, most important clarification I need to make right now is this:
All of the features you freely enjoy today on gitpitch.com will continue to be free. Free as in beer. For everyone. Just like today.
I hope understanding this alone will help calm some of your initial concerns.
Having worked full-time and entirely gratis on GitPitch for almost two years now, covering all costs of development, maintenance, and hosting of the service itself on gitpitch.com, I believe I have made a reasonably large contribution to open-source and to the broader developer community on Git. A contribution happily made.
But I now need to pivot some of my efforts towards something that will deliver a sustainable income stream. Without this, I simply can not continue. And if I can not continue, neither can GitPitch.
So my main focus today is to make sure GitPitch can continue to delight an ever growing community of presentation geeks on Git, now and long into the future.
Rather than just flipping-a-switch and saying free-no-more-time-to-pay for the current feature set I am building new and enhanced features. These new features will obviously be above and beyond the current feature set. And this approach ensures I can offer-real-added-value to users who choose to become subscribers. This approach also ensures that I can do this without taking anything away from users who decide to remain on the free plan.
I'm not in a position to announce further details of the subscription model today as it remains a work-in-progress, but as with many offerings online, there will be a number of plans, providing real choice for users.
And of course, one choice will be to simply continue as today, happily and freely enjoying all that GitPitch has to offer right now.
So I'm not sure where you will eventually land when GitPitch Pro rolls out, but at the very least I hope you will continue to enjoy using the service at some level, long into the future.
I recommend you keep an eye on the GitPitch Blog for further announcements. Or of course, you can follow me on Twitter for the same news updates.
Hi David,
While I suspect that it is unlikely that I will be signing up for GitPitch Pro myself on the day it rolls out (other tools currently still seem better suited for my current needs, overall), I really do wish you success in this endeavour: you have made something useful, worthwhile and beautiful, and from my perspective, cosmic justice would be done if you could make it a financial success too.
As an aside, have you considered something like a Kickstarter campaign? I mention this because I was really grateful that the Kickstarter campaign for Magit gave me an opportunity to offer financial support for the effort behind one of my most useful tools.
On a sociological note: the author embedded a subtle and easily permanently-suppressable (with visible instructions on how to do it) heads-up note about the existence of the campaign, in the software itself. This got one or two very negative reactions, even though most seemed to think it was a reasonable thing to do.
I seem to recall from browsing the issues here, that your need for visibility sometimes causes minor friction with users' needs, so I wish you the best of luck in navigating this minefield.
It's a shame that creators of value must, all too often, divert much of their effort into monetizing their product. I suspect that both you and your users would be happier if you could concentrate on making GitPitch even better, rather than on coming up with schemes for ensuring that you can pay your bills.
I wonder whether society will come up with better solutions to this general problem, any time soon.
Thanks again for this useful feedback and helpful suggestions. Here's to navigating minefields and cosmic justice :)
| gharchive/issue | 2017-10-31T13:43:36 | 2025-04-01T06:44:19.537190 | {
"authors": [
"gitpitch",
"jacg"
],
"repo": "gitpitch/gitpitch",
"url": "https://github.com/gitpitch/gitpitch/issues/123",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1827835282 | Put the installation of Scarb in the jupyter notebook
Hi thanks for the amazing work!
Meanwhile, I would like to report an issue. When I tried to run the final execution block, I found that because I had not installed Scarb, it could not be run. However, in the notebook I did not find instructions to install Scarb.
Maybe we can add some instructions at the very beginning?
Hi @gillsgills can you link to a specific tutorial?
This one has the steps on adding all the dependencies required.
https://github.com/gizatechxyz/orion_tutorials/blob/main/mnist_nn/QAT_MNIST_MLP.ipynb
@raphaelDkhn should I make a PR to add the dependency installation steps to the other two tutorials? It might be helpful for people trying them out in random order.
@okhaimie-dev Sure sounds good! Thanks!
Hey @raphaelDkhn @gillsgills I can work on it if it's still pending
@gillsgills Kindly assign this to me.
@gillsgills Can I work on this? This is the detailed installation that needs to be added:
Install via quick installation script
curl --proto '=https' --tlsv1.2 -sSf https://docs.swmansion.com/scarb/install.sh | sh
Install via the asdf version manager
asdf plugin add scarb
asdf install scarb latest
asdf global scarb latest
| gharchive/issue | 2023-07-30T07:23:03 | 2025-04-01T06:44:19.619835 | {
"authors": [
"JoshdfG",
"Xaxxoo",
"cpp-phoenix",
"gillsgills",
"okhaimie-dev",
"raphaelDkhn"
],
"repo": "gizatechxyz/orion_tutorials",
"url": "https://github.com/gizatechxyz/orion_tutorials/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2205903733 | Two notification questions/requests
This module seems to have been built largely off of the default Weather module. I have another module (MMM-RAIN-MAP) that is triggered off the default weather module's WEATHER_UPDATED notification, which means I have both this module and the default Weather module making API calls to OpenWeatherMap.
Would it be easy to change this module to send and receive the same WEATHER_UPDATED notification that the default Weather module does either in addition to or instead of the the WEATHER_ALERTS_UPDATED notification that it currently uses? I'd be happy to do a little coding and a PR for this, but I wanted to make sure it was possible first, and maybe get pointed in the right direction by someone familiar with this module.
In a somewhat contradictory request, I'm wondering whether it's possible to have this module receive alert notifications from the MMM-OpenWeatherMapForecast module instead of directly from the API. Is there a notification that module sends that could be used? And if not, what do I need to code into that app to get this app to react off a notification from that one? Again, happy to do a little coding if necessary.
I'm just trying to cut down on the number of modules making independent API calls to OpenWeatherMap.
I'm not great with JavaScript, so I don't know how to easily output or capture the various notifications that are flying around from these apps to see how similar/different they are. If anyone has a suggestion, let me know.
Hi, thanks for the comments. It may be possible to eliminate the need for a separate API call by using the module "notification" system (https://docs.magicmirror.builders/development/notifications.html). What I don't know is if modification to the default weather module is required.
I'm busy with work this week, but will try to find some time to see what the implementation would look like. I saw your other comments relative to Kristjan's project. I will review your PR and see about pulling into main. Thanks.
| gharchive/issue | 2024-03-25T14:32:02 | 2025-04-01T06:44:19.636576 | {
"authors": [
"dathbe",
"gjonesme"
],
"repo": "gjonesme/MMM-WeatherAlerts",
"url": "https://github.com/gjonesme/MMM-WeatherAlerts/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2274452540 | Fix to only render objects with visible true
It makes sense that only objects with visible true are rendered.
The old code used traverseVisible when generating the mesh list, but it was missing in the newer code.
Great, thank you for catching this! I'll make a new release soon.
| gharchive/pull-request | 2024-05-02T01:56:05 | 2025-04-01T06:44:19.651784 | {
"authors": [
"KihwanChoi12",
"gkjohnson"
],
"repo": "gkjohnson/three-gpu-pathtracer",
"url": "https://github.com/gkjohnson/three-gpu-pathtracer/pull/632",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
275099340 | optimize x in [0], x in []
This applies the same optimization in use for x in [0 1] to one- and zero-element array literals.
Fix gkz/LiveScript#986.
This is a small optimization, so I'm setting the comment period for this at one week (plus change, to account for Thanksgiving weekend), after which I will merge on Nov 27 if nobody has any concerns.
Why not error for an empty array?
@vendethiel, because there's nothing ambiguous or difficult to understand or handle about the empty array case, IMO. It's a silly thing to write but so is the singleton array case, and as you can see from the files changed, there were two examples of that in our very own lexer. I figure users don't set out to write unnecessarily complex expressions; they write expressions that maybe need the full power of in and then add and remove elements as their code evolves. If a user wants to keep in [x] or in [ ], for symmetry with other parts of the code, for ease of code generation, or simply out of sheer laziness, I don't see why LS should force them to change since the meaning is still perfectly clear.
Contrast with, for example, array slices, where x[] is an error instead of desugaring to []. Here, there is a good reason to error, because there's a discontinuity in the syntax: x[0] is simple element access, not a one-element slice. So it's natural to wonder what the ‘correct’ next element in the sequence x[0 1 2], x[0 1], x[0], ??? should be. Since that case is both silly and ambiguous, it makes sense to not define it and make it a syntax error. But x in [] is quite unambiguous—that code has a meaning today and shouldn't change.
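For illustration, the desugaring being discussed can be sketched as follows (shown in Python rather than the compiler's JavaScript output; the equality-chain forms are a sketch of the idea, not LiveScript's literal generated code):

```python
# General form: build the array and scan it at runtime
# (what `x in [0 1]` means without the optimization).
def member_general(x):
    return x in [0, 1]

# Optimized forms the PR extends down to one- and zero-element literals:
def member_pair(x):       # x in [0 1]  ->  x === 0 || x === 1
    return x == 0 or x == 1

def member_singleton(x):  # x in [0]    ->  x === 0
    return x == 0

def member_empty(x):      # x in []     ->  false
    return False
```

Both forms agree on every input; the optimized versions simply avoid allocating and scanning an array.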
I disagree. It's not about being "difficult to understand"; LS doesn't really care about that. It's about never meaning what you want. The only case you'd have such code is after a refactoring.
Hm. Well, this does give me pause—I wasn't expecting controversy here. I am worried about the idea of taking an expression that isn't confusing and used to work and making it an error without good justification, and I'm not sure that ‘you probably meant to remove this code instead’ is good enough. I'm not aware of much precedent for that level of paternalism here—even the complaint about 0 == \zero is a warning, not an error, and IMO that's a much clearer sign of an issue with your code.
A warning would be totally fine IMHO.
Okay, let's start this up again. Since the compiler warnings proposal didn't generate any objections, I merged it, and now this PR has warnings. I'll give anyone one more week to have second thoughts and then merge on March 14.
| gharchive/pull-request | 2017-11-18T18:26:07 | 2025-04-01T06:44:19.668344 | {
"authors": [
"rhendric",
"vendethiel"
],
"repo": "gkz/LiveScript",
"url": "https://github.com/gkz/LiveScript/pull/987",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2146429910 | Test_unit_module4_VEP.py only runs correctly when it is in the same directory as module4_VEP_code_only.py
Test_unit_module4_VEP.py only runs correctly when it is in the same directory as module4_VEP_code_only.py. I have left it in the Test folder for now... it only runs when it is placed in the Modules folder along with module4_VEP_code_only.py. I think we need to set up the linking of the Test folder to the modules it is testing in the Modules directory.
I know it has something to do with having `__init__.py` in both the Modules and Test folders, but I have not grasped the concept yet. And we need to set it up so that the Test folder is one level up from the modules it is testing in the Modules directory.
Adding `Modules.module4_VEP_code_only` to complete the path to the module solves the problem, so it can be used in our tests from the same directory.
Brilliant, Sonja. Just brilliant: adding `Modules.` in front of `module4_VEP_code_only.py` solves the problem entirely.
Now the test can be run from anywhere. Great!
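A minimal sketch of the fix described in this thread, assuming the layout implied by it (a `Modules` package containing `module4_VEP_code_only.py`, importable via a package-qualified path). The directory tree is built in a throwaway temp folder purely for illustration, and the `run` function is a stand-in for the real module's contents:

```python
import importlib
import sys
import tempfile
from pathlib import Path

# Recreate the layout described in the thread in a throwaway directory:
#   <root>/Modules/__init__.py
#   <root>/Modules/module4_VEP_code_only.py
root = Path(tempfile.mkdtemp())
pkg = root / "Modules"
pkg.mkdir()
(pkg / "__init__.py").write_text("")
(pkg / "module4_VEP_code_only.py").write_text("def run():\n    return 'ok'\n")

# With the project root on sys.path, the package-qualified import
# (the `Modules.` prefix) resolves from any working directory --
# which is exactly why prefixing the module name fixed the test.
sys.path.insert(0, str(root))
vep = importlib.import_module("Modules.module4_VEP_code_only")
print(vep.run())  # ok
```

The same idea applies without the temp directory: keep an `__init__.py` in `Modules`, import with the `Modules.` prefix, and run the tests from the project root (e.g. `python -m unittest Test.Test_unit_module4_VEP`, assuming the folder names from the thread) so that `Modules` is importable as a package.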
| gharchive/issue | 2024-02-21T10:43:16 | 2025-04-01T06:44:19.673013 | {
"authors": [
"SonjaR-UniMan",
"ayurahman"
],
"repo": "gl186/SprintFinish",
"url": "https://github.com/gl186/SprintFinish/issues/56",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
126731771 | No way to extract just the name or version from a gradle task
Instead you get rather noisy output including both the name and code when you probably just want the name.
Closed in 8757add6
| gharchive/issue | 2016-01-14T19:40:50 | 2025-04-01T06:44:19.674631 | {
"authors": [
"HPGlade",
"gladed"
],
"repo": "gladed/gradle-android-git-version",
"url": "https://github.com/gladed/gradle-android-git-version/issues/14",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1794126823 | 🛑 Schoelcher is down
In 2114498, Schoelcher (https://www.mairie-schoelcher.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Schoelcher is back up in 76f007f.
| gharchive/issue | 2023-07-07T19:36:07 | 2025-04-01T06:44:19.693813 | {
"authors": [
"glefait"
],
"repo": "glefait/martinique-public-websites",
"url": "https://github.com/glefait/martinique-public-websites/issues/10608",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1798563253 | 🛑 Schoelcher is down
In 0b178b1, Schoelcher (https://www.mairie-schoelcher.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Schoelcher is back up in 86a0e85.
| gharchive/issue | 2023-07-11T09:54:09 | 2025-04-01T06:44:19.696488 | {
"authors": [
"glefait"
],
"repo": "glefait/martinique-public-websites",
"url": "https://github.com/glefait/martinique-public-websites/issues/10705",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1954535570 | 🛑 Schoelcher is down
In 98402a1, Schoelcher (https://www.mairie-schoelcher.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Schoelcher is back up in 659586b after 35 minutes.
| gharchive/issue | 2023-10-20T15:31:51 | 2025-04-01T06:44:19.699124 | {
"authors": [
"glefait"
],
"repo": "glefait/martinique-public-websites",
"url": "https://github.com/glefait/martinique-public-websites/issues/14158",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1991966839 | 🛑 Schoelcher is down
In a2a5571, Schoelcher (https://www.mairie-schoelcher.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Schoelcher is back up in dc2815f after 5 minutes.
| gharchive/issue | 2023-11-14T04:29:28 | 2025-04-01T06:44:19.701801 | {
"authors": [
"glefait"
],
"repo": "glefait/martinique-public-websites",
"url": "https://github.com/glefait/martinique-public-websites/issues/14987",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2107015541 | 🛑 Tropiques Atrium is down
In 9d6cf80, Tropiques Atrium (https://tropiques-atrium.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Tropiques Atrium is back up in 36b8264 after 6 minutes.
| gharchive/issue | 2024-01-30T05:59:08 | 2025-04-01T06:44:19.704715 | {
"authors": [
"glefait"
],
"repo": "glefait/martinique-public-websites",
"url": "https://github.com/glefait/martinique-public-websites/issues/15844",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2459136989 | 🛑 Tropiques Atrium is down
In c7f8826, Tropiques Atrium (https://tropiques-atrium.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Tropiques Atrium is back up in 72991ac after 5 minutes.
| gharchive/issue | 2024-08-10T15:32:17 | 2025-04-01T06:44:19.707230 | {
"authors": [
"glefait"
],
"repo": "glefait/martinique-public-websites",
"url": "https://github.com/glefait/martinique-public-websites/issues/18139",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1506886571 | 🛑 Schoelcher is down
In c06bffd, Schoelcher (https://www.mairie-schoelcher.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Schoelcher is back up in bed3234.
| gharchive/issue | 2022-12-21T20:40:26 | 2025-04-01T06:44:19.709829 | {
"authors": [
"glefait"
],
"repo": "glefait/martinique-public-websites",
"url": "https://github.com/glefait/martinique-public-websites/issues/2831",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1524060188 | 🛑 Schoelcher is down
In 1391293, Schoelcher (https://www.mairie-schoelcher.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Schoelcher is back up in d3a8ab3.
| gharchive/issue | 2023-01-07T18:50:05 | 2025-04-01T06:44:19.712473 | {
"authors": [
"glefait"
],
"repo": "glefait/martinique-public-websites",
"url": "https://github.com/glefait/martinique-public-websites/issues/3617",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1599768735 | 🛑 Schoelcher is down
In 1de3d89, Schoelcher (https://www.mairie-schoelcher.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Schoelcher is back up in c0860ff.
| gharchive/issue | 2023-02-25T17:05:14 | 2025-04-01T06:44:19.715266 | {
"authors": [
"glefait"
],
"repo": "glefait/martinique-public-websites",
"url": "https://github.com/glefait/martinique-public-websites/issues/5621",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |