| added | created | id | metadata | source | text |
|---|---|---|---|---|---|
2025-04-01T06:38:12.350225
| 2018-09-24T12:05:37
|
363117128
|
{
"authors": [
"fuhbar"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4709",
"repo": "claranet/spryker-demoshop",
"url": "https://github.com/claranet/spryker-demoshop/pull/35"
}
|
gharchive/pull-request
|
Reintroduce differentiation between application environments
Instead of relying on only one environment, this commit changes the run
wrapper and the docker-compose stack to differentiate between
development and production, and to allow running both in parallel.
In other words: Reintroduce the significance of $APPLICATION_ENV
This commit fixes:
#36 - Jenkins image build takes twice the build time
#33 - Reintroduce distinction between development and production environment
#32 - Documentation is gone
#31 - Jenkins jobs constantly throwing errors
|
2025-04-01T06:38:12.362285
| 2023-03-14T19:01:55
|
1624127825
|
{
"authors": [
"jonbarker68"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4710",
"repo": "claritychallenge/clarity",
"url": "https://github.com/claritychallenge/clarity/pull/221"
}
|
gharchive/pull-request
|
jpb/220-enable-some-pylint-error-checking-in-ci
Adding pylint error checking to the CI part of the pre-commit hooks. The .pre-commit-config.yaml now has two entries for pylint: one that runs error checks only and is compatible with CI, and one that runs as 'repo: local' and uses the .pylintrc, which will pick up all the usual style warnings etc. but is skipped by CI.
I have also run pylint with the errors-only flag over the full repo and fixed all issues that were picked up, or added pylint ignores for the ones that appeared to be false positives.
Ruff: I have also added 'ruff' to the pre-commit hooks. 'ruff' is a Rust-based linter that is super fast (but doesn't yet have quite as many rules as pylint). I fixed a number of issues that the ruff check picked up on. Ruff might eventually be able to replace pylint, but for now we can just keep it in as an extra linter because it takes very little time to run.
@groadabike - there is nothing significant in this PR - mostly just fixing line-too-long type errors and issues that pylint considers 'errors', i.e. Exxx codes. Could you please give it a quick review (I don't want to get in the habit of overriding the need for reviews :-). Once this is integrated, pylint will pass with the '-E' (errors only) flag, which can then be included safely in the pre-commit CI path.
Going to merge this PR if
|
2025-04-01T06:38:12.370696
| 2022-03-03T23:21:43
|
1159054561
|
{
"authors": [
"clariusk",
"sergiojsanabria"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4711",
"repo": "clariusdev/raw",
"url": "https://github.com/clariusdev/raw/issues/4"
}
|
gharchive/issue
|
Understanding IQ data generated by Raw data package with Clarius L7HD052110A0780
Hi, clarius dev team,
Based on the indications of your support team, I forward you this issue from my support request to Clarius.
I am currently acquiring IQ data with the Raw data package of Clarius L7HD052110A0780.
From here, I am trying to go back and synthesize RF data, basically performing modulation back to RF frequencies.
Since it is not clear to me how IQ is calculated in the first place, I am having difficulties synthesizing RF.
Could you let me know the algorithmic steps that are followed to go from RF to IQ?
My main question is how to choose the right carrier frequency "fc". Is this equivalent to the transmit frequency in the .yml file?
I attach my current script in Python, in case this can be valuable for this github project:
## From here we resynthesize RF data. iq_line, fs, fc and toff are assumed
## to be already loaded from the Clarius raw data and its .yml header.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.fft import fft, fftfreq

UPSAMPLING = 4
fs_rf = fs * UPSAMPLING
# interpolation: zero-stuff the IQ samples up to the RF sampling rate
rf_line_int = np.zeros(len(iq_line) * UPSAMPLING, dtype=complex)
rf_line_int[0::UPSAMPLING] = UPSAMPLING * iq_line[:]
tcoords_rf = (np.arange(rf_line_int.shape[0]) + toff * UPSAMPLING) / fs_rf
SHIFT_FREQ = 3e6  # shift before low-pass filtering to avoid distortion
rf_line_int_ = rf_line_int * np.exp(2 * np.pi * SHIFT_FREQ * 1j * tcoords_rf)
# filtering: low-pass to remove the interpolation replicas
b, a = butter(8, 1 / UPSAMPLING)
rf_line_filt = filtfilt(b, a, rf_line_int_)
# carrier modulation back to the RF band
rf_line_analytic = rf_line_filt * np.exp(2 * np.pi * (-SHIFT_FREQ + fs / 2 + fc) * 1j * tcoords_rf)
rf_line = np.real(rf_line_analytic)
rf_line_fft = fft(rf_line)
rf_line_freq = fftfreq(len(rf_line), 1 / fs_rf)
Attached also an example of the spectrum of my current IQ lines and the corresponding RF synthesized with the code above.
The bandwidth of the probe after modulation (4-13 MHz) is apparently correct.
Thank you and best wishes,
Sergio Sanabria
a few tips:
you'll want to look at the sampling rate in the yaml file to determine any downsampling from 60MHz
demodulation is done with a sliding frequency as a function of depth; see shallowfreq, deepfreq, and filterdepth, which are the parameters used, applied linearly
Hi, @clariusk, thanks for the quick answer.
I have looked into the demodulation parameters you mention.
It seems deepfreq is always 7.5 MHz, and shallowfreq: 11 MHz, with a single filterdepth of 59.6999 mm
My understanding is that at 0 mm the depth carrier is 11 MHz, linearly decreasing down to 7.5 MHz at 59.6999 mm
and staying constant at 7.5 MHz for larger depths. Is this correct? .yml extract below.
Sergio
receive:
- {shallowfreq: 11 MHz, deepfreq: 7.5 MHz, filterdepth: 59.699999999999996 mm}
- {shallowfreq: 11 MHz, deepfreq: 7.5 MHz, filterdepth: 59.699999999999996 mm}
- {shallowfreq: 11 MHz, deepfreq: 7.5 MHz, filterdepth: 59.699999999999996 mm}
- {shallowfreq: 11 MHz, deepfreq: 7.5 MHz, filterdepth: 59.699999999999996 mm}
- {shallowfreq: 11 MHz, deepfreq: 7.5 MHz, filterdepth: 59.699999999999996 mm}
- {shallowfreq: 11 MHz, deepfreq: 7.5 MHz, filterdepth: 59.699999999999996 mm}
- {shallowfreq: 11 MHz, deepfreq: 7.5 MHz, filterdepth: 59.699999999999996 mm}
- {shallowfreq: 11 MHz, deepfreq: 7.5 MHz, filterdepth: 59.699999999999996 mm}
yes, that's exactly correct
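A minimal sketch of that depth-dependent demodulation carrier, assuming only the linear ramp confirmed above (the 11 MHz / 7.5 MHz / 59.7 mm values come from the .yml extract; the function name and depth inputs are illustrative, not part of the Clarius API):
import numpy as np

def demodulation_frequency(depth_mm,
                           shallowfreq_hz=11e6,
                           deepfreq_hz=7.5e6,
                           filterdepth_mm=59.7):
    """Carrier used for demodulation at a given depth: linear from
    shallowfreq at 0 mm to deepfreq at filterdepth, then constant."""
    depth_mm = np.asarray(depth_mm, dtype=float)
    frac = np.clip(depth_mm / filterdepth_mm, 0.0, 1.0)
    return shallowfreq_hz + frac * (deepfreq_hz - shallowfreq_hz)

# example: carrier at a few depths (mm)
print(demodulation_frequency([0.0, 30.0, 59.7, 80.0]))
# approximately [11.0e6, 9.24e6, 7.5e6, 7.5e6] Hz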
|
2025-04-01T06:38:12.409982
| 2023-09-04T20:10:39
|
1880817658
|
{
"authors": [
"bsctl"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4712",
"repo": "clastix/kamaji",
"url": "https://github.com/clastix/kamaji/pull/371"
}
|
gharchive/pull-request
|
Add CNCF Conformance 1.26 1.27 1.28
close #356
@prometherion should we merge before related PRs are merged in k8s-conformance? All required tests have passed!
|
2025-04-01T06:38:12.417763
| 2020-12-04T20:53:20
|
757397980
|
{
"authors": [
"AlfainCoder",
"Simba14",
"Sorgrum",
"clauderic",
"horprogs",
"ohayojp",
"willadamskeane"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4713",
"repo": "clauderic/dnd-kit",
"url": "https://github.com/clauderic/dnd-kit/issues/23"
}
|
gharchive/issue
|
Improve auto-scrolling for small containers / large drag sources
Currently, if you have a small scroll container and a large drag source, it's nearly impossible to get finely controlled auto-scrolling working properly given the existing auto-scrolling logic.
The getScrollDirectionAndSpeed logic will need to be updated to take into account the relative size of the item being dragged compared to the scroll container's actual size.
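One way to picture that idea is a small Python sketch (dnd-kit itself is TypeScript, and this is not its implementation) of a centre-based heuristic, where an item larger than the container no longer sits in both edge activation zones at once; all names and numbers below are hypothetical:
def scroll_direction_and_speed(container_min, container_max,
                               item_min, item_max,
                               dead_zone_ratio=0.25, max_speed=25.0):
    """Illustrative heuristic only, not dnd-kit's code. Direction and
    speed are derived from the dragged item's centre rather than its
    edges, so a large item in a small container still allows fine
    positioning inside the central dead zone."""
    container_size = container_max - container_min
    container_centre = (container_min + container_max) / 2.0
    item_centre = (item_min + item_max) / 2.0
    offset = item_centre - container_centre
    dead_zone = container_size * dead_zone_ratio
    if abs(offset) <= dead_zone:
        return 0, 0.0  # centre still inside the dead zone: no auto-scroll
    # speed ramps from 0 at the dead-zone boundary up to max_speed
    usable = max(container_size / 2.0 - dead_zone, 1e-9)
    strength = min((abs(offset) - dead_zone) / usable, 1.0)
    return (1 if offset > 0 else -1), max_speed * strength

# Example along one axis: a 300px item whose centre sits 75px below the
# centre of a 200px container scrolls down at half speed.
print(scroll_direction_and_speed(0, 200, 25, 325))  # -> (1, 12.5)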
Hey, first of all, thank you for your work!
This issue is related to this behaviour, right? Is there any way to improve this?
@horprogs that's one of the manifestations of the issue, indeed.
On how this could be improved: Generally I think it's a bit tricky to find a one-size-fits-all strategy for auto-scrolling. I think in the future this should be a bit more extensible, and have either a pointer coordinates based strategy or a strategy that is based on the bounding rect of the dragged element
@clauderic Looking the grid example, I wonder if the auto-scrolling is too aggressive. Right now it seems that the page auto-scrolls when the element is brought down past the halfway point of the page. A simple solution might be to only scroll if the element that is being dragged is nearing the edge of the viewport. Any thoughts?
Having similar issues, but with the scrollable container instantly scrolling to the top on drag.
Any workarounds?
I don't know if this is related to this issue, but when you try to scroll with a dragged element without a DragOverlay, the element position won't be updated for the duration of the scroll. You can see it here: https://5fc05e08a4a65d0021ae0bf2-vfebfgjygq.chromatic.com/?path=/docs/presets-sortable-grid--without-drag-overlay
Drag an element and try to scroll the page with it. Result: the mouse position and the element position are different.
I'm running into this issue as well - has anyone found an interim workaround?
I'm working on a fix to these issues here: #140
I am using dnd-kit and a sortable context with an array. In my case auto-scrolling is not working in a list while an item is being dragged.
|
2025-04-01T06:38:12.420607
| 2017-07-11T07:13:47
|
241949792
|
{
"authors": [
"clauderic",
"tcm2029"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4714",
"repo": "clauderic/react-infinite-calendar",
"url": "https://github.com/clauderic/react-infinite-calendar/issues/130"
}
|
gharchive/issue
|
Unexpected token when import css
I followed the guidelines from the readme page and added the following line in my JS file (it's a React component):
import "react-infinite-calendar/styles.css";
But I encounter this error when running gulp:
events.js:160
throw er; // Unhandled 'error' event
^
SyntaxError: Unexpected token
Does anyone know how to fix it? I'm quite new to React.
You need to use the appropriate loader.
Assuming you're using webpack, check out https://github.com/webpack-contrib/css-loader
|
2025-04-01T06:38:12.423420
| 2015-03-24T00:10:42
|
63862890
|
{
"authors": [
"claudiowilson",
"dukevomv"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4715",
"repo": "claudiowilson/LeagueJS",
"url": "https://github.com/claudiowilson/LeagueJS/issues/32"
}
|
gharchive/issue
|
Masteries and Runes possible bug
I seem to have a problem retrieving the masteries and runes after the last update.
I tried all the "LolApi.Summoner..." functions and for some reason getMasteries() and getRunes() does not work for me. They return undefined as result and null as error.
Im using the 'eune' server.
I'll check it out!
fixed it in the latest version :)
Appreciate it! Thank you
|
2025-04-01T06:38:12.429797
| 2017-10-09T22:58:48
|
264049982
|
{
"authors": [
"codingfriend1"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4716",
"repo": "claustres/quasar-templates",
"url": "https://github.com/claustres/quasar-templates/pull/2"
}
|
gharchive/pull-request
|
Optimized ssr
Reduces file size of SSR build file. Fixes bug where sometimes hot reload server would be blank. Improves speed. Combines both package.json files. Fixes lazy loading of navigation without throwing hydration errors. Now not all modules will be loaded at once.
What kind of change does this PR introduce? (check at least one)
[x] Bugfix
[x] Feature
[ ] Code style update
[ ] Refactor
[ ] Build-related changes
[ ] Other, please describe:
Does this PR introduce a breaking change? (check one)
[x] Yes
[ ] No
If yes, please describe the impact and migration path for existing applications:
The PR fulfills these requirements:
[x] It's submitted to the dev branch and not the master branch
[ ] When resolving a specific issue, it's referenced in the PR's title (e.g. fix: #xxx[,#xxx], where "xxx" is the issue number)
[ ] It's been tested with all Quasar themes
[ ] It's been tested on a Cordova (iOS, Android) app
[ ] It's been tested on an Electron app
[ ] Any necessary documentation has been added or updated in the docs (for faster update click on "Suggest an edit on GitHub" at bottom of page) or explained in the PR's description.
If adding a new feature, the PR's description includes:
[x] A convincing reason for adding this feature (to avoid wasting your time, it's best to open a suggestion issue first and wait for approval before working on it)
This branch has the same issue as my latest pull request from hot-reload-ssr: the issue of mocking the window object and Vue Router history mode. Please see the other pull request for details.
|
2025-04-01T06:38:12.481516
| 2024-06-06T19:31:22
|
2339015002
|
{
"authors": [
"axl1313",
"kat-wicks"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4721",
"repo": "cleanlab/cleanlab-studio",
"url": "https://github.com/cleanlab/cleanlab-studio/pull/235"
}
|
gharchive/pull-request
|
change abort() to AuthError
Use AuthError instead of abort() for invalid API keys to reduce the alert noise from wrong keys.
Can you fix CI and format before merging?
|
2025-04-01T06:38:12.484259
| 2017-04-11T16:54:14
|
221015610
|
{
"authors": [
"coveralls",
"dlespiau",
"jodh-intel"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4722",
"repo": "clearcontainers/proxy",
"url": "https://github.com/clearcontainers/proxy/pull/29"
}
|
gharchive/pull-request
|
proxy: Restrict the socket and parent directory modes
We don't need those sockets to be readable/writable by the whole world; root-only
access is enough.
Signed-off-by: Damien Lespiau<EMAIL_ADDRESS>
Coverage remained the same at 69.828% when pulling 8c67a3135a1b42d0241a681c6c9fd88a70d7e214 on dlespiau:20170411-socket-perms into 653860d1e034d12963f2abf06e1f3289f7ed1348 on clearcontainers:master.
lgtm
|
2025-04-01T06:38:12.514946
| 2021-12-21T06:06:40
|
1085471932
|
{
"authors": [
"LandOfBliss",
"ariel11",
"capfei"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4723",
"repo": "clearlydefined/curated-data",
"url": "https://github.com/clearlydefined/curated-data/pull/16571"
}
|
gharchive/pull-request
|
oop 0.0.3
Type: Missing
Summary:
oop 0.0.3
Details:
NPM license field indicates NONE
No project link
Way Back Machine had no record
Resolution:
NONE
Affected definitions:
oop 0.0.3
@capfei - I think this might be the repo/license - https://github.com/felixge/node-oop/blob/master/LICENSE. Thoughts on updating this from NONE to MIT?
@ariel11 Did you find that link somewhere in the package? I didn't see it.
@capfei - I don't see the repo link in the package specifically, however both are by "felixge" and there's this issue where someone asked about a license and "felixge" confirmed MIT. I think we can take that as evidence this package is MIT. Thoughts?
|
2025-04-01T06:38:12.520608
| 2017-01-30T21:06:17
|
204129387
|
{
"authors": [
"MarcDee91",
"clearthesky"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4724",
"repo": "clearthesky/requests",
"url": "https://github.com/clearthesky/requests/issues/10"
}
|
gharchive/issue
|
Changing User-Agent
First of all: really nice project!
I have tried to change the User-Agent via:
Map<String, String> params = new HashMap<>();
params.put("User-Agent", "Mozilla/5.0 (Windows NT 6.0; rv:2.0.1) Gecko/20100101 Firefox/4.0.1");
System.out.println(net.dongliu.requests.Requests.get("http://localhost:2343/init.aspx").headers(params).send().readToText());
But it doesn't seem to work: The User-Agent is Requests/4.0, Java 1.8.0
So is it possible to change the agent?
This code does change the user-agent for me. Which version are you using?
Or you can try
Requests.get(url).userAgent("Mozilla/5.0 (Windows NT 6.0; rv:2.0.1) Gecko/20100101 Firefox/4.0.1")
Thanks for replying!
I am using the latest version from the central maven repo. --> 4.7.1
I have tried your solution, but it doesn't seem to work! (I have used www.whoishostingthis.com to get the user-agent.)
System.out.println(net.dongliu.requests.Requests.get("http://www.whoishostingthis.com/tools/user-agent").userAgent("Mozilla/5.0 (Windows NT 6.0; rv:2.0.1) Gecko/20100101 Firefox/4.0.1").send().readToText());
Does this work for you ?
The site redirects http://www.whoishostingthis.com/tools/user-agent to http://www.whoishostingthis.com/tools/user-agent/; it seems Requests does not use the specified user-agent when sending the redirected HTTP request. I will fix this later.
For now, using Requests.get("http://www.whoishostingthis.com/tools/user-agent/") you can get the expected result.
Hey thanks, I will test it when the maven repo is up-to-date!
Now you can try version 4.7.2 and set the user agent with .userAgent("Mozilla/5.0 (Windows NT 6.0; rv:2.0.1) Gecko/20100101 Firefox/4.0.1")
|
2025-04-01T06:38:12.569865
| 2024-02-23T18:15:50
|
2151576705
|
{
"authors": [
"anagstef"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4725",
"repo": "clerk/javascript",
"url": "https://github.com/clerk/javascript/pull/2853"
}
|
gharchive/pull-request
|
fix(clerk-js): Use a fixed border radius value for OrganizationAvatar
Description
OrganizationAvatar should maintain its border-radius, even if the user passes a value in the borderRadius variable.
Checklist
[ ] npm test runs as expected.
[ ] npm run build runs as expected.
[ ] (If applicable) JSDoc comments have been added or updated for any package exports
[ ] (If applicable) Documentation has been updated
Type of change
[ ] 🐛 Bug fix
[ ] 🌟 New feature
[ ] 🔨 Breaking change
[ ] 📖 Refactoring / dependency upgrade / documentation
[ ] other:
!preview
@desiprisg yes, i'm sure!
|
2025-04-01T06:38:12.661707
| 2020-02-06T13:44:12
|
561021832
|
{
"authors": [
"fernandomm",
"stepozer"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4726",
"repo": "cloocher/xmlhasher",
"url": "https://github.com/cloocher/xmlhasher/pull/25"
}
|
gharchive/pull-request
|
Fix string_keys to work when it's the only option specified
If string_keys was set without any other options, it wouldn't work since the "name" variable is initially a symbol.
Fix #24.
Thank you
|
2025-04-01T06:38:12.663121
| 2024-01-06T12:19:54
|
2068582959
|
{
"authors": [
"andremartinssw",
"bourdakos1"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4727",
"repo": "cloud-annotations/docusaurus-openapi",
"url": "https://github.com/cloud-annotations/docusaurus-openapi/pull/277"
}
|
gharchive/pull-request
|
Use parseMarkdownFile instead of parseMarkdownString
Fixes #276
Can you resolve the conflicts?
Done! I hadn't branched from the latest main, sorry :grimacing:
Thanks!
|
2025-04-01T06:38:12.664121
| 2019-08-25T21:40:42
|
484979412
|
{
"authors": [
"imokya",
"vyphan009"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4728",
"repo": "cloud-annotations/training",
"url": "https://github.com/cloud-annotations/training/issues/112"
}
|
gharchive/issue
|
conversion failed
After training is finished, I get a "conversion failed" error.
I got the same problem.
I still get this error after upgrading to the latest version.
|
2025-04-01T06:38:12.687404
| 2022-03-09T11:59:22
|
1163848913
|
{
"authors": [
"csantanapr",
"phemankita"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4729",
"repo": "cloud-native-toolkit/automation-modules",
"url": "https://github.com/cloud-native-toolkit/automation-modules/pull/279"
}
|
gharchive/pull-request
|
Adding BOM Catalog
BOM Categories
Create New Cluster
Use Existing Cluster
BOM Catalog UI
Implemented MVP1
Displays the contents category-wise
Within the categories, grouping is done according to the cloud provider.
/lgtm
|
2025-04-01T06:38:12.701164
| 2020-03-31T08:43:12
|
590913237
|
{
"authors": [
"asatblurbs",
"paulfantom"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4730",
"repo": "cloudalchemy/ansible-prometheus",
"url": "https://github.com/cloudalchemy/ansible-prometheus/issues/279"
}
|
gharchive/issue
|
Readme variable prometheus_binaries_local_dir
What did you do?
I ran a playbook using this role with the variable "prometheus_binaries_local_dir", but it seems the playbook could not complete. This needs to be done since my environment is located on an air-gapped network.
Did you expect to see something different?
I expected the playbook to complete and install Prometheus on the remote host.
Environment
Role version:
Insert release version/galaxy tag or Git SHA here
Ansible version information:
ansible 2.9.2
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/deploy/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Aug 2 2016, 04:20:16) [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)]
Variables:
---
- name : Deploy prometheus
hosts: ams
roles:
- cloudalchemy.prometheus
vars:
prometheus_version: 2.17.0
prometheus_binary_local_dir: /data/monitoring/prometheus/
prometheus_db_dir: /apps/prometheus/database
prometheus_targets:
node:
- targets:
- localhost:9100
labels:
env: ams
Ansible playbook execution Logs:
PLAY [Deploy prometheus] **********************************************************************************
TASK [Gathering Facts] ************************************************************************************
ok: [camelia-ams]
TASK [cloudalchemy.prometheus : Gather variables for each operating system] *******************************
ok: [camelia-ams] => (item=/etc/ansible/roles/cloudalchemy.prometheus/vars/redhat.yml)
TASK [cloudalchemy.prometheus : Assert usage of systemd as an init system] ********************************
ok: [camelia-ams] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [cloudalchemy.prometheus : Get systemd version] ******************************************************
ok: [camelia-ams]
TASK [cloudalchemy.prometheus : Set systemd version fact] *************************************************
ok: [camelia-ams]
TASK [cloudalchemy.prometheus : Assert no duplicate config flags] *****************************************
ok: [camelia-ams] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [cloudalchemy.prometheus : Assert external_labels aren't configured twice] ***************************
ok: [camelia-ams] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [cloudalchemy.prometheus : Set prometheus external metrics path] *************************************
ok: [camelia-ams]
TASK [cloudalchemy.prometheus : Fail when prometheus_config_flags_extra duplicates parameters set by other variables] ***
skipping: [camelia-ams] => (item=storage.tsdb.retention)
skipping: [camelia-ams] => (item=storage.tsdb.path)
skipping: [camelia-ams] => (item=storage.local.retention)
skipping: [camelia-ams] => (item=storage.local.path)
skipping: [camelia-ams] => (item=config.file)
skipping: [camelia-ams] => (item=web.listen-address)
skipping: [camelia-ams] => (item=web.external-url)
TASK [cloudalchemy.prometheus : Get all file_sd files from scrape_configs] ********************************
ok: [camelia-ams]
TASK [cloudalchemy.prometheus : Fail when file_sd targets are not defined in scrape_configs] **************
skipping: [camelia-ams] => (item={'value': [{u'labels': {u'env': u'ams'}, u'targets': [u'localhost:9100']}], 'key': u'node'})
TASK [cloudalchemy.prometheus : Alert when prometheus_alertmanager_config is empty, but prometheus_alert_rules is specified] ***
ok: [camelia-ams] => {
"msg": "No alertmanager configuration was specified. If you want your alerts to be sent make sure to specify a prometheus_alertmanager_config in defaults/main.yml.\n"
}
TASK [cloudalchemy.prometheus : Get latest release] *******************************************************
skipping: [camelia-ams]
TASK [cloudalchemy.prometheus : Set prometheus version to {{ _latest_release.json.tag_name[1:] }}] ********
skipping: [camelia-ams]
TASK [cloudalchemy.prometheus : Get checksum list] ********************************************************
fatal: [camelia-ams]: FAILED! => {"msg": "An unhandled exception occurred while running the lookup plugin 'url'. Error was a <class 'ansible.errors.AnsibleError'>, original message: Failed lookup url for https://github.com/prometheus/prometheus/releases/download/v2.17.0/sha256sums.txt : <urlopen error timed out>"}
NO MORE HOSTS LEFT ****************************************************************************************
Anything else we need to know?:
I realized that in the Readme file, you give the variable as "prometheus_binaries_local_dir", while in defaults/main.yml the correct name is prometheus_binary_local_dir. Once I updated my playbook variable accordingly, it completed successfully. Does the Readme need an update?
Yes, the readme needs an update; the correct variable name is in defaults/main.yml.
Fixed with #280
|
2025-04-01T06:38:12.703278
| 2024-10-08T07:49:36
|
2572384633
|
{
"authors": [
"carlhoerberg",
"spuun"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4731",
"repo": "cloudamqp/lavinmq",
"url": "https://github.com/cloudamqp/lavinmq/pull/800"
}
|
gharchive/pull-request
|
Uniform log sources
WHAT is this pull request doing?
Create all Log instances from LavinMQ::Log. This will prefix all log sources with "lmq", which in turn makes it easy to see whether a log line comes from a lib (e.g. amqp-client) or from lavinmq itself.
HOW can this pull request be tested?
Run lavin with --debug and look at log output.
Should probably require "./lavinmq/logger" in each file that needs it, rather than at the top level, if we want to compile any part independently. Pretty sure vim will complain otherwise, as it won't be able to compile the file to check formatting, linting etc.
|
2025-04-01T06:38:12.711355
| 2017-01-17T13:39:12
|
201286543
|
{
"authors": [
"brynh"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4732",
"repo": "cloudant/sync-android",
"url": "https://github.com/cloudant/sync-android/pull/480"
}
|
gharchive/pull-request
|
Add ability to have multiple PeriodicReplicationServices
This PR fixes the following issues:
The abstract class PeriodicReplicationService currently stores information in SharedPreferences under fixed key names. This means that if an app implements multiple concrete PeriodicReplicationServices they will interfere with each other. To fix this, we now store the values in SharedPreferences using keys prefixed with the name of the concrete class implementing the PeriodicReplicationService.
After a reboot, the elapsed time since boot at which the next replication should occur isn't always updated to reflect the fact the device rebooted. This could lead to replications not being invoked at the expected times.
Rather than storing the time at which the next replication should be triggered, we now store the time at which the last replication occurred. This feels more logical and means we don't have to recalculate the value stored in SharedPreferences after bind/unbind and the only time the value will need adjusting is after a reboot.
Testing
The existing tests have been updated in accordance with these changes. They now verify that the keys used in SharedPreferences are prefixed with the name of the concrete implementation of PeriodicReplicationService and verify that the elapsed time since boot is always updated after the service has been notified of a reboot.
@tomblench I've tried to address your comments in 8bfe34e. Hopefully it makes the tests more easily readable.
|
2025-04-01T06:38:12.729799
| 2018-11-19T09:52:37
|
382134399
|
{
"authors": [
"Ciaro"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4738",
"repo": "cloudcreativity/laravel-json-api",
"url": "https://github.com/cloudcreativity/laravel-json-api/issues/260"
}
|
gharchive/issue
|
Laravel 5.7 support - fresh app install
Hello
When installing on a fresh Laravel 5.7 app instance, I had to change the following in the RouteServiceProvider:
Otherwise I got 404 errors.
Best regards,
Ciaro
Thank you for your reply! I must have overlooked that section.
|
2025-04-01T06:38:12.730968
| 2017-08-16T06:16:32
|
250521824
|
{
"authors": [
"romainr",
"xiaolongge904913"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4739",
"repo": "cloudera/hue",
"url": "https://github.com/cloudera/hue/pull/580"
}
|
gharchive/pull-request
|
Corrected a spelling error
Corrected a spelling error in the docs
Nice!
Would you mind rebasing the PR so that we only have the change in 'docs/manual.txt' showing up?
|
2025-04-01T06:38:12.734408
| 2021-03-15T13:37:05
|
831819274
|
{
"authors": [
"sejoker",
"trevordbc"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4740",
"repo": "cloudflare/Cloudflare-WordPress",
"url": "https://github.com/cloudflare/Cloudflare-WordPress/issues/388"
}
|
gharchive/issue
|
Can't Detect Wordpress
My station's site is built on WordPress, the Cloudflare plugin is enabled, and I've connected it with the API key. This morning I decided to go ahead and upgrade to $20/month as we're getting a ton of traffic and the site is getting incredibly slow in the mornings. When I go to enable APO, it tells me that it's not a WordPress website...
Hi @trevordbc please try following the steps: "What if my Cloudflare dashboard says it can't detect the WordPress plugin?".
If that doesn't help, please raise a support ticket and post the number here (I will have a look).
Please install the latest version of the plugin and navigate to the APO card in the plugin; it should automatically fix your APO settings.
|
2025-04-01T06:38:12.736153
| 2020-12-04T02:53:17
|
756754900
|
{
"authors": [
"jacobbednarz",
"joeles"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4741",
"repo": "cloudflare/Cloudflare-WordPress",
"url": "https://github.com/cloudflare/Cloudflare-WordPress/pull/332"
}
|
gharchive/pull-request
|
Add support for configuration via environment vars
Having the ability to configure credentials via environment vars would be quite appreciated by advanced users.
Storing credentials in a database can be insecure if data is exported to other environments, eg: dev.
Reusing credentials is not flexible if dev environments are running on different domain(s).
awesome stuff! after battling YAML, i managed to test this out and it works great! thank you ❤️
Sure thing! Glad to help.
|
2025-04-01T06:38:12.745377
| 2022-01-12T21:27:21
|
1100804961
|
{
"authors": [
"chocolatkey",
"codewithkristian"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4742",
"repo": "cloudflare/cloudflare-docs",
"url": "https://github.com/cloudflare/cloudflare-docs/issues/3134"
}
|
gharchive/issue
|
Adding Firebase App Check workers template to docs
I've been using CF workers along with Firebase App Check for a while now in production. This is sort of equivalent to a captcha, and is useful for preventing easy/excessive access to resources without first passing the check (an invisible captcha on the web, or Android/iOS attestation).
I thought it might be useful to share the code in the form of a template, available here https://github.com/chocolatkey/worker-appcheck-template , inspired by the current quickstart examples. Is it worth adding this as one?
Hey @chocolatkey - thanks for the idea! Would you mind opening a PR and adding the template to this page: https://developers.cloudflare.com/workers/get-started/quickstarts#example-projects
File in source is here: https://github.com/cloudflare/cloudflare-docs/blob/production/products/workers/src/content/get-started/quickstarts.md
@codewithkristian Done, here: https://github.com/cloudflare/cloudflare-docs/pull/3365
|
2025-04-01T06:38:12.755959
| 2024-01-20T19:14:52
|
2092209328
|
{
"authors": [
"dario-piotrowicz",
"lcswillems"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4743",
"repo": "cloudflare/next-on-pages",
"url": "https://github.com/cloudflare/next-on-pages/issues/647"
}
|
gharchive/issue
|
[⚡ Feature]: Supporting Next.js runtime Node.js
Description
I see Cloudflare supporting more and more Node.js modules.
Additional Information
No response
Would you like to help?
[ ] Would you like to help implement this feature?
Hi @lcswillems 👋
Thanks for the issue 🙂, unfortunately I think that supporting the Node.js runtime is quite out of the question, mainly for two reasons:
This adapter works by generating a worker from the Vercel CLI build command (vercel build) output. Such output comprises edge functions and (AWS) lambda functions; the adapter collects the code from the former to build the worker and discards the latter (when discarding the code doesn't cause a loss of functionality; if it does, then the adapter's build process fails). The latter is very AWS-lambda-specific code and not something we can really use (unless we were to deploy it to AWS).
The Cloudflare runtime does support more Node.js modules, but not all of them (fs, path, os, etc.). I am not sure if and how we on next-on-pages could support those. Should they be no-ops? Surely that would break many, many use cases, no? (PS: I have no idea if/how/when they could be included in the runtime either.)
The only practical solution I can think of that would address both issues above would be to actually make @cloudflare/next-on-pages work with both runtimes and, for the Node runtime, have the lambda(s) deployed to AWS, although that would introduce many issues:
we want the code to run only in a worker, as that is what provides the best and fastest user experience; mixing this with lambdas, which could often be required to run before the worker code, would quite defeat the purpose here
we'd have to find a way to connect the worker we produce with the AWS lambda(s) we get, making the whole application more complex and with more failure points. This would likely be quite awkward/cumbersome locally.
we'd have to find a way to "version"/"keep in sync" the worker and the AWS lambda(s), which might be very tricky since the two would reside on different platforms.
Additionally (likely a personal opinion)... this project is called next-on-pages as it aims to allow running Next.js applications on the Cloudflare Pages platform, so making it more generic and including AWS (or whatever other platform) seems out of scope to me and not something that this project was designed or created for.
Please let me know what you think of the above 🙂
If you have any potential solutions/ideas please also feel free to throw them my way 😄
Hi @dario-piotrowicz , what about just transforming the AWS lambda into workers? Doesn't seem too hard? And it would fail only if I use node modules not supported by Cloudflare.
@lcswillems as I mentioned above:
This adapter works by generating a worker from the Vercel CLI build command
(vercel build) output. Such output comprises edge functions and (AWS) lambda
functions; the adapter collects the code from the former to build the worker and
discards the latter (when discarding the code doesn't cause a loss of functionality;
if it does, then the adapter's build process fails). The latter is very AWS-lambda-specific
code and not something we can really use (unless we were to deploy it to AWS).
The Vercel build generates complex lambdas which usually contain multiple routes bundled and grouped in an optimized way, so that they grow as big as they can without reaching the AWS lambdas' max size (50MB). So, besides other things here, in contrast to what we have with edge functions, there isn't even a 1-to-1 relationship between lambdas and routes.
It is, as I said, an AWS-specific build, so pretty useless for us. I did look into it in depth a while back (I mean, I looked at whether we could infer any useful information/extract any useful code from the lambdas output), which gives me confidence in saying that, in my opinion, there isn't really anything valuable from the lambdas that we could reuse 😕
|
2025-04-01T06:38:12.773690
| 2020-03-30T14:38:52
|
590345104
|
{
"authors": [
"LPardue",
"ctiller",
"xanathar"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4744",
"repo": "cloudflare/quiche",
"url": "https://github.com/cloudflare/quiche/issues/430"
}
|
gharchive/issue
|
QUIC datagram extensions
Hi,
Is there any intention/plan of supporting the current draft of datagram extensions as defined by https://tools.ietf.org/html/draft-ietf-quic-datagram-00 ?
In case this is not in the plans, would such a contribution be welcome, or is it considered out of scope?
I'd be interested in being able to use this.
I have a separate fork of quiche with rough DATAGRAM support.
The implementation is pretty straightforward, the trickier part is getting the API correct.
Now that the DATAGRAM extension has been adopted by the QUIC WG there is a stronger case for looking at this more seriously. Having people interested in using it makes a better case for getting feedback and coverage of the code.
Happy to help iterate on the API.
I did a quick rebase of LPardue's forked branch on top of master (https://github.com/xanathar/quiche/commits/datagrams-0.3.0).
This is the current API I extracted diffing the files (as currently implemented, of course):
// In lib.rs:
pub fn Config::set_max_datagram_frame_size(&mut self, v: u64)
pub fn Connection::dgram_send(&mut self, buf: &[u8]) -> Result<()>
pub fn Connection::dgram_recv(&mut self) -> Result<Vec<u8>>
// In webtransport/mod.rs:
pub enum Error
pub const QUICTRANSPORT_ALPN: &[u8]
pub type Result<T>
pub fn QuicTransport::with_transport(conn: &mut super::Connection, origin: &str, path: &str) -> Result<QuicTransport>
pub fn QuicTransport::dgram_send(&mut self, conn: &mut super::Connection, buf: &[u8]) -> Result<()>
// In h3/ffi.rs:
#[no_mangle] pub extern fn quiche_h3_datagram_event_type(ev: &h3::DatagramEvent) -> u32
pub enum DatagramEvent
pub fn dgram_send(&mut self, conn: &mut super::Connection, flow_id: u64, buf: &[u8]) -> Result<()>
pub fn poll_dgram(&mut self, conn: &mut super::Connection) -> Result<(u64, DatagramEvent)>
pub fn process_dgram(&mut self, conn: &mut super::Connection) -> Result<(u64, DatagramEvent)>
I'm currently focusing on the quic part (i.e. no http3 things).
The Connection::dgram_recv API has a signature which is radically different from Connection::stream_recv, but something like pub fn dgram_recv(&mut self, out: &mut [u8]) -> Result<(usize, bool)> would require an extra copy of the buffer (but maybe it would allow an easier interop with C).
As far as C interop is concerned, I drafted a commit with a working C interface to the datagram API: https://github.com/xanathar/quiche/commit/1dbb0b0a3e3c043fe403ae636fa65516b1262ed6 ; feedbacks are welcome (specially around quiche_conn_dgram_free).
One question on the functional aspect: in theory the datagram extension draft says datagrams must be subject to the flow control which in quiche is represented by the send API limits (or whatever is returned by stream_capacity). Does this mean that dgram_send should be limited by the application? If so, should we expose something more?
Thanks for looking at this. Ideally the datagram work should be broken into three pieces that reflect the maturity of the standards process: QUIC datagram, H3 datagram and WebTransport. WebTransport is different enough that I should spin it off into a separate PR and we can ignore it for now.
stream_recv allows the caller to read as much or as little as they want. IIRC I decided to make the dgram_recv signature different because it doesn't make sense to read a partial datagram. Given the pains you've had to go through to get this working with the C API I'm not opposed to changing dgram_recv, it would just need some checks to ensure the output buffer is large enough for the received datagram. Providing a large enough buffer is pretty easy because the receiving endpoint sets its own max_datagram_frame_size TP.
I don't know. On one side it would allow for cleaner interop; on the other, if one is using datagrams it is probably for performance-sensitive stuff, and saving a copy could be worth it, especially considering that, with datagrams coming out of order, duplicated etc., it's likely that the application cannot specify the final location but needs yet another copy to reassemble the message in any case. I'm seriously on the fence on this.
As an aside, I added some datagram calls to a C++ application I had which is also using a stream and sometimes it stops working; I'm debugging it right now. 99% is some mistake on my side, but it seems it's spinning in while let Some(len) = self.dgram_queue.peek_writable(). Will keep you updated.
I've reasoned a while on the interface which the datagram extensions could expose to the application code, and these are a couple of my proposals; please provide feedback.
Scope
This proposal works only around the QUIC part. I have not considered (yet) the H3 part of the problem, nor the WebTransport, to start from the foundations.
Also, I tried to do an effort towards having an API which most closely matches the other APIs in quiche, if no real drawback comes from this (there are trade-offs to be considered).
Rationales
I took into account three points of rationale from days of testing this in (private) real-world prototypes.
First is having some control over the outgoing queue (that is, the queue of packets to be sent). This is to handle several scenarios where datagrams which were meant to be sent asap have now lost their "value"; for example, a typical case is an application discarding old stale datagrams when a new datagram of the same type is appended to the queue, or prioritizing datagrams.
Typical cases here are previous video frames after too much time has passed, videogame state snapshots, audio packets after a given delta of time has passed (if no speed-modulation is done) or really any data which is either cumulative and idempotent or has a maximum useful age.
Second, a topic raised in this issue and https://github.com/LPardue/quiche/pull/1 was whether it was best to keep the dgram_recv interface as it was in LPardue's branch (dgram_recv(&mut self) -> Result<Vec<u8>>) or it was better to have it in a form more similar to stream_recv which also allows for an easier to use interface to C at the expense of an extra memory copy (though not allocation) at every call (dgram_recv(&mut self, out: &mut [u8])). Details can be found at https://github.com/LPardue/quiche/pull/1, but the long story short is that there seems to be no or negligible advantages in having one less copy of data, and the cleaner interface is a winner; if what found in that issue proves to be incorrect, it’s a matter of amending dgram_recv.
Third - minor - for naming uniformity with Config::set_max_datagram_frame_size, which cannot be renamed to ..dgram.. because it refers to a specific QUIC transport parameter, I renamed all the methods from dgram_something to datagram_something.
Proposal 1
This proposal is the one with the simplest interface. Everything stays the same as in LPardue's branch, except for minor details:
The ability to configure the size of the datagram queues (i.e. 2 config calls)
A datagram_purge_outgoing to allow applications to remove datagrams waiting to be sent, which are outdated for some reason.
The drawback of this approach, which is far cleaner than my second proposal, is that applications can easily purge outdated packets from the sending queue, but not reprioritize them (other than purging the queue temporarily and re-adding them in the desired order). Still, this can be extended, for example with a datagram_send_urgent or similar, at a later time, when and if it proves to be needed.
/// Sets the maximum size of the datagram send queue
pub fn Config::set_datagram_send_queue_size(&mut self, size: u64) -> Result<()>;
/// Sets the maximum size of the datagram received queue
pub fn Config::set_datagram_recv_queue_size(&mut self, size: u64) -> Result<()>;
/// Sets the transport's max_datagram_frame_size
pub fn Config::set_max_datagram_frame_size(&mut self, v: u64);
/// Sends a datagram on a connection
pub fn Connection::datagram_send(&mut self, buf: &[u8]) -> Result<()>;
/// Receives a datagram from the connection
pub fn Connection::datagram_recv(&mut self, out: &mut [u8]) -> Result<usize>;
/// Iterates over the outgoing queue and purges datagrams matching the filter
pub fn Connection::datagram_purge_outgoing<F>(&mut self, filter: F) -> Result<()>
where F: FnMut(&[u8]) -> bool;
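To make the intent of datagram_purge_outgoing concrete, here is a small Python illustration of the queue semantics only (the real API above is Rust and part of the proposal; the class and method names below are hypothetical):
from collections import deque

class OutgoingDatagramQueue:
    """Python illustration of the Proposal 1 purge semantics only;
    the real quiche API sketched above is Rust, and these names are
    hypothetical."""

    def __init__(self, max_len: int):
        self._queue = deque(maxlen=max_len)

    def push(self, datagram: bytes) -> None:
        # queue a datagram for the next send pass
        self._queue.append(datagram)

    def pop(self):
        # next datagram to put on the wire, or None if the queue is empty
        return self._queue.popleft() if self._queue else None

    def purge(self, is_stale) -> None:
        # drop queued datagrams that have lost their value before being
        # sent, e.g. video frames made obsolete by a newer frame
        self._queue = deque((d for d in self._queue if not is_stale(d)),
                            maxlen=self._queue.maxlen)

# example: a newer video frame arrives, so older queued frames are dropped
q = OutgoingDatagramQueue(max_len=16)
q.push(b"VID:frame-41")
q.push(b"POS:update-7")
q.push(b"VID:frame-42")
q.purge(lambda d: d.startswith(b"VID:") and d < b"VID:frame-42")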
Proposal 2
This proposal is the one with the most control over the sending queue. An object with the DatagramsOutgoingQueue trait can be passed by the application to customize the behavior of the sending queue to its preference.
/// An implementation may be passed by the application, otherwise the default one
/// will be used.
pub trait DatagramsOutgoingQueue
where
Self: std::fmt::Debug,
{
/// Adds a datagram to the outgoing queue
pub fn push_datagram(&mut self, data: &[u8]) -> Result<()> ;
/// Returns the size of the next outgoing datagram to be sent, or None.
pub fn peek_datagram(&self) -> Option<usize>;
/// Returns the next buffer to be sent (or None).
pub fn pop_datagram(&mut self) -> Result<&[u8]>;
}
/// Sets the outgoing datagram queue to be used, the default one is used otherwise
pub fn Config::set_datagram_send_queue(&mut self, &queue: DatagramsOutgoingQueue);
/// Sets the maximum size of the datagram received queue
pub fn Config::set_datagram_recv_queue_size(&mut self, size: u64) -> Result<()>;
/// Sets the transport's max_datagram_frame_size
pub fn Config::set_max_datagram_frame_size(&mut self, v: u64);
/// Sends a datagram on a connection, queuing it to the current DatagramsOutgoingQueue
pub fn Connection::datagram_send(&mut self, buf: &[u8]) -> Result<()>;
/// Receives a datagram from the connection
pub fn Connection::datagram_recv(&mut self, out: &mut [u8]) -> Result<usize>;
As said, feedback is very welcome, as is any alternative, etc.
@xanathar proposal 2 is an interesting alternative. How do you see an application loop working with this? Something like:
conn.datagram_send("foo");
conn.datagram_send("bar");
conn.datagram_send("baz");
conn.send(...) which internally will get around to calling self.DatagramsOutgoingQueue.pop()
Do you imagine an ability to purge items in DatagramsOutgoingQueue, which may be public or private depending on need?
yes, proposal 2 is exactly as you described; the ability to purge items in DatagramsOutgoingQueue would be there, since the object with the DatagramsOutgoingQueue trait would be provided by the application itself (with a default implementation which doesn't allow it, for those who don't need that level of control).
|
2025-04-01T06:38:12.780492
| 2022-02-02T23:59:57
|
1122508606
|
{
"authors": [
"jacobbednarz",
"nickysemenza"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4745",
"repo": "cloudflare/terraform-provider-cloudflare",
"url": "https://github.com/cloudflare/terraform-provider-cloudflare/pull/1424"
}
|
gharchive/pull-request
|
update validation records/errors for custom_hostname and certificate_pack
resource/custom_hostname: validation tokens are now an array (`validation_records`) instead of a top-level attribute; the only top-level record that was previously here was for CNAME validation, and TXT/HTTP/email were entirely missing.
resource/custom_hostname: also adds missing `validation_errors`, and `certificate_authority`
resource/certificate_pack: adds `validation_errors` and `validation_records` with same format as custom hostnames.
relies on lib behavior added in https://github.com/cloudflare/cloudflare-go/pull/796
@nickysemenza could you move the CHANGELOG entry to the file, as the documentation mentions? that will allow it to be picked up correctly post-merge.
acceptance tests are green
TF_ACC=1 go test $(go list ./...) -v -run "^TestAccCloudflareCustomHostname" -count 1 -parallel 1 -timeout 120m -parallel 1
? github.com/cloudflare/terraform-provider-cloudflare [no test files]
=== RUN TestAccCloudflareCustomHostnameFallbackOrigin
--- PASS: TestAccCloudflareCustomHostnameFallbackOrigin (13.94s)
=== RUN TestAccCloudflareCustomHostnameFallbackOriginUpdate
--- PASS: TestAccCloudflareCustomHostnameFallbackOriginUpdate (23.16s)
=== RUN TestAccCloudflareCustomHostname_Basic
=== PAUSE TestAccCloudflareCustomHostname_Basic
=== RUN TestAccCloudflareCustomHostname_WithCustomOriginServer
=== PAUSE TestAccCloudflareCustomHostname_WithCustomOriginServer
=== RUN TestAccCloudflareCustomHostname_WithHTTPValidation
=== PAUSE TestAccCloudflareCustomHostname_WithHTTPValidation
=== RUN TestAccCloudflareCustomHostname_WithCustomSSLSettings
=== PAUSE TestAccCloudflareCustomHostname_WithCustomSSLSettings
=== RUN TestAccCloudflareCustomHostname_Update
=== PAUSE TestAccCloudflareCustomHostname_Update
=== RUN TestAccCloudflareCustomHostname_WithNoSSL
=== PAUSE TestAccCloudflareCustomHostname_WithNoSSL
=== RUN TestAccCloudflareCustomHostname_UpdatingZoneForcesNewResource
=== PAUSE TestAccCloudflareCustomHostname_UpdatingZoneForcesNewResource
=== RUN TestAccCloudflareCustomHostname_Import
=== PAUSE TestAccCloudflareCustomHostname_Import
=== CONT TestAccCloudflareCustomHostname_Basic
--- PASS: TestAccCloudflareCustomHostname_Basic (10.22s)
=== CONT TestAccCloudflareCustomHostname_Update
--- PASS: TestAccCloudflareCustomHostname_Update (21.45s)
=== CONT TestAccCloudflareCustomHostname_Import
--- PASS: TestAccCloudflareCustomHostname_Import (14.20s)
=== CONT TestAccCloudflareCustomHostname_UpdatingZoneForcesNewResource
--- PASS: TestAccCloudflareCustomHostname_UpdatingZoneForcesNewResource (19.54s)
=== CONT TestAccCloudflareCustomHostname_WithNoSSL
--- PASS: TestAccCloudflareCustomHostname_WithNoSSL (10.91s)
=== CONT TestAccCloudflareCustomHostname_WithHTTPValidation
--- PASS: TestAccCloudflareCustomHostname_WithHTTPValidation (11.00s)
=== CONT TestAccCloudflareCustomHostname_WithCustomSSLSettings
--- PASS: TestAccCloudflareCustomHostname_WithCustomSSLSettings (10.29s)
=== CONT TestAccCloudflareCustomHostname_WithCustomOriginServer
--- PASS: TestAccCloudflareCustomHostname_WithCustomOriginServer (10.79s)
PASS
ok github.com/cloudflare/terraform-provider-cloudflare/cloudflare 146.054s
? github.com/cloudflare/terraform-provider-cloudflare/tools/cmd/changelog-check [no test files]
? github.com/cloudflare/terraform-provider-cloudflare/version [no test files]
|
2025-04-01T06:38:12.789084
| 2022-10-07T14:19:22
|
1401259857
|
{
"authors": [
"jacobbednarz",
"nickysemenza",
"will-bluem-olo"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4746",
"repo": "cloudflare/terraform-provider-cloudflare",
"url": "https://github.com/cloudflare/terraform-provider-cloudflare/pull/1953"
}
|
gharchive/pull-request
|
Issue 1840 Add custom_hostname wait_for_active_status
As shown in https://github.com/cloudflare/terraform-provider-cloudflare/issues/1840, it can be problematic to create required validation records in the same terraform apply run because the custom_hostname resource completes creation before the required validation records are present on the resource.
This pull adds a wait_for_active_status flag similar to the flag introduced in https://github.com/cloudflare/terraform-provider-cloudflare/pull/1567.
I was NOT able to run the acceptance tests because I do not have a suitable Cloudflare account to do so. However I did test this with my currently blocked configuration and it resolved the issues I was seeing.
i've gone back and had a look at the initial issue however i don't think this actually addresses the problem raised.
in the initial ticket, ownership_verification and ownership_verification_http are both set from the initial creation call; however, the initial issue is trying to use the ssl.validation_records object for validation, which this PR will never check, so i'm unsure how this PR is fixing your issue.
@nickysemenza are you able to confirm which config block should be checked for the manual validation records? perhaps the original issue is just using the wrong fields.
interesting. it seemed from my testing that the ssl.validation_records were set once the hostname hit active status. this PR worked well for me with my local testing. i'll attach a sample tf file that i was testing with. i'll admit i'm largely unfamiliar with the underlying cloudflare api so if there is a better way to accomplish this i'll take that up instead.
main.tf.txt
testing_results.txt
the SSL sub-object will get its validation_records set once it transitions from initializing -> pending_validation (which, if the parent hostname passes validation on the first try, will likely happen around the same time as the hostname transitioning from pending -> active; the SSL validation records require a call to the certificate authority in the background, whereas the custom hostname validation records are generated in-house).
So as for the issue described in #1840, waiting until resource.cloudflare_custom_hostname.test.ssl.0.validation_records.0.txt_value has a value can be accomplished by waiting until ssl.status == "pending_validation" or something along those lines - perhaps wait_for_ssl_pending_validation would be more appropriate?
see https://developers.cloudflare.com/ssl/reference/certificate-statuses/ and https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/domain-support/hostname-verification/#verification-statuses for references on statuses
thanks for the info - i've adjusted the pr to reflect that input
acceptance tests are passing
TF_ACC=1 go test $(go list ./...) -v -run "^TestAccCloudflareCustomHostname_" -count 1 -parallel 1 -timeout 120m -parallel 1
? github.com/cloudflare/terraform-provider-cloudflare [no test files]
=== RUN TestAccCloudflareCustomHostname_Basic
=== PAUSE TestAccCloudflareCustomHostname_Basic
=== RUN TestAccCloudflareCustomHostname_WaitForActive
=== PAUSE TestAccCloudflareCustomHostname_WaitForActive
=== RUN TestAccCloudflareCustomHostname_WithCustomOriginServer
=== PAUSE TestAccCloudflareCustomHostname_WithCustomOriginServer
=== RUN TestAccCloudflareCustomHostname_WithHTTPValidation
=== PAUSE TestAccCloudflareCustomHostname_WithHTTPValidation
=== RUN TestAccCloudflareCustomHostname_WithCustomSSLSettings
=== PAUSE TestAccCloudflareCustomHostname_WithCustomSSLSettings
=== RUN TestAccCloudflareCustomHostname_Update
=== PAUSE TestAccCloudflareCustomHostname_Update
=== RUN TestAccCloudflareCustomHostname_WithNoSSL
=== PAUSE TestAccCloudflareCustomHostname_WithNoSSL
=== RUN TestAccCloudflareCustomHostname_UpdatingZoneForcesNewResource
=== PAUSE TestAccCloudflareCustomHostname_UpdatingZoneForcesNewResource
=== RUN TestAccCloudflareCustomHostname_Import
=== PAUSE TestAccCloudflareCustomHostname_Import
=== CONT TestAccCloudflareCustomHostname_Basic
--- PASS: TestAccCloudflareCustomHostname_Basic (8.25s)
=== CONT TestAccCloudflareCustomHostname_Update
--- PASS: TestAccCloudflareCustomHostname_Update (14.25s)
=== CONT TestAccCloudflareCustomHostname_Import
--- PASS: TestAccCloudflareCustomHostname_Import (10.16s)
=== CONT TestAccCloudflareCustomHostname_UpdatingZoneForcesNewResource
--- PASS: TestAccCloudflareCustomHostname_UpdatingZoneForcesNewResource (15.28s)
=== CONT TestAccCloudflareCustomHostname_WithNoSSL
--- PASS: TestAccCloudflareCustomHostname_WithNoSSL (7.64s)
=== CONT TestAccCloudflareCustomHostname_WithHTTPValidation
--- PASS: TestAccCloudflareCustomHostname_WithHTTPValidation (7.60s)
=== CONT TestAccCloudflareCustomHostname_WithCustomSSLSettings
--- PASS: TestAccCloudflareCustomHostname_WithCustomSSLSettings (12.76s)
=== CONT TestAccCloudflareCustomHostname_WithCustomOriginServer
--- PASS: TestAccCloudflareCustomHostname_WithCustomOriginServer (8.60s)
=== CONT TestAccCloudflareCustomHostname_WaitForActive
--- PASS: TestAccCloudflareCustomHostname_WaitForActive (10.60s)
PASS
ok github.com/cloudflare/terraform-provider-cloudflare/internal/provider 95.440s
thanks for this one @will-bluem-olo! we appreciate the effort you've put in -- your first contribution at that! 🎉
|
2025-04-01T06:38:12.790609
| 2022-10-25T15:10:49
|
1422629324
|
{
"authors": [
"jasnell",
"vlovich"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4747",
"repo": "cloudflare/workerd",
"url": "https://github.com/cloudflare/workerd/pull/127"
}
|
gharchive/pull-request
|
Small typo fix in comment
Deafult -> Default
@vlovich ... need you to ack the CLAAssistant here...
recheck
|
2025-04-01T06:38:12.800782
| 2022-11-29T19:39:12
|
1468600673
|
{
"authors": [
"lucasnad27",
"rozenmd",
"vlovich"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4748",
"repo": "cloudflare/wrangler2",
"url": "https://github.com/cloudflare/wrangler2/issues/2309"
}
|
gharchive/issue
|
🐛 BUG: wrangler r2 object CLI
What version of Wrangler are you using?
2.5.0
What operating system are you using?
Mac
Describe the Bug
To reproduce:
Create a new r2 bucket called foobar
Create a README.md
Attempt to upload the README.md file to foobar with the following command:
wrangler r2 object put foobar/README.md -f ./README.md
❯ wrangler r2 object put foobar/README.md -f ./README.md
⛅️ wrangler 2.5.0
-------------------
Creating object "README.md" in bucket "foobar".
✘ [ERROR] Failed to fetch /accounts/<account id>/r2/buckets/foobar/objects/README.md - 404: Not Found);
If you think this is a bug then please create an issue at https://github.com/cloudflare/wrangler2/issues/new/choose
Running this command outputs the error shown above.
Now I don't think this is a bug and is likely PEBKAC. Hopefully, someone can point me in the right direction, and I'll close this issue. But this is not the only time I've been frustrated by the Cloudflare developer experience. All that I want to do is copy a file from my local system to R2. The cli wasn't intuitive enough for me to guess--I assumed it would be similar to the cp command--but the -h was well done. After trying many variations of this command and still receiving similar errors, I gave up and went to the docs. I'm not using workers in any capacity, but I have to go to the workers help to get any information on wrangler, a little confusing, but maybe I'm an outlier here. I get to the commands section and see r2 bucket documentation. But nothing for r2 object. I am, again, frustrated. I try the official R2 documentation, also a little confusing going to multiple product pages. No examples using wrangler, just 3rd party libs. Hence, filing this bug.
I love what y'all are doing, and I like the direction Cloudflare is heading with its developer-focused products. Still, I've encountered many paper cuts, primarily in the fragmented documentation across multiple product umbrellas. I consider pages, workers, r2, images etc., all to be bleeding-edge products, and I constantly advocate for their use over more traditional software development patterns, but these issues make that more difficult.
Hi @lucasnad27, I'm so sorry you experienced this issue.
We'll update the wrangler command docs to document the wrangler r2 object commands, and I've raised with the team whether it still makes sense to keep Wrangler underneath Workers in the docs.
thanks @rozenmd Appreciate the quick response.
Should I keep this issue open until I see an update to the docs? Or close and re-open if I run into issues once those docs are available?
@lucasnad27 I'll make a PR into https://github.com/cloudflare/cloudflare-docs resolving the issue, which will automatically close this issue and the one I just filed in that repo (https://github.com/cloudflare/cloudflare-docs/issues/6894)
@lucasnad27 i think you're using the command correctly. Silly question though. Does the bucket foobar exist for you? If you email my GitHub username at cloudflare dot com with your account ID I can take a deeper look for you (just reference this GitHub issue for context).
Sorry for the frustration and hopefully we can get you unblocked.
As an alternative, there are other command-line tools out there that are more mature. Rclone in particular is quite popular.
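For anyone landing here, a rough sketch of the rclone route (R2 is exposed through rclone's S3 backend; the keys and account ID below are placeholders):

# ~/.config/rclone/rclone.conf
[r2]
type = s3
provider = Cloudflare
access_key_id = <r2-access-key-id>
secret_access_key = <r2-secret-access-key>
endpoint = https://<account-id>.r2.cloudflarestorage.com

# copy a local file into the bucket
rclone copy ./README.md r2:foobar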
No silly questions! :) foobar was a placeholder for a bucket created by my CF account.
However, I just tried reproducing the error using the same command I posted earlier (wrangler r2 object put foobar/README.md -f ./README.md) and it worked! I thought there might be a consistency issue with my last failed attempt, so I tried creating a new bucket (both from the CLI & UI) and was able to upload the README.md file without issue 🤷
Thanks for the tip re: rclone. If I do anything substantive from the CLI, I'll keep that in mind.
|
2025-04-01T06:38:12.823710
| 2017-11-06T12:14:13
|
271455205
|
{
"authors": [
"fluffle",
"johnsonj",
"knyar"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4749",
"repo": "cloudfoundry-community/stackdriver-tools",
"url": "https://github.com/cloudfoundry-community/stackdriver-tools/pull/144"
}
|
gharchive/pull-request
|
Config options for metric prefix and director name.
The nozzle currently places all the metrics it derives from firehose
data in the root of Stackdriver's custom metric namespace. This means
there's a relatively large chance that these names could collide with
others that were not created by the nozzle. To mitigate this, a config
option "metric_path_prefix" (that defaults to "firehose") is added.
After this commit, Stackdriver metric names will be of the form:
custom.googleapis.com/PREFIX/origin.Name
custom.googleapis.com/firehose/gorouter.total_requests
Running multiple PCF instances in the same GCP project will result in
metrics from all instances being confused with each other. Since the
nozzle runs on BOSH, another config option "bosh_director_name" (that
defaults to "cf") has been added. This sets the value of a static
"director" label added to every metric exported by the nozzle. Setting
this to different values for each PCF instance in a project (e.g. the
GCP region, if running one PCF instance per region) allows PCF metrics
to be distinguished from one another.
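As a rough illustration (using the nozzle.metric_path_prefix and nozzle.bosh_director_name job properties referenced later in this review), the options could be set in a deployment manifest roughly like this:

properties:
  nozzle:
    metric_path_prefix: firehose
    bosh_director_name: us-east1-prod

With those values, metrics land under custom.googleapis.com/firehose/... and carry a director=us-east1-prod label.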
I'll rebase this onto develop after PR #136 is in, which ought to make it easier to review. I wanted to base this PR off that one because it made logical sense to do so, but I'm not github-savvy enough to know how to exclude the first commit from the second PR, if such a thing is even possible.
Review status: 0 of 10 files reviewed at latest revision, 1 unresolved discussion.
src/stackdriver-nozzle/config/config.go, line 69 at r1 (raw file):
MetricsBufferSize int `envconfig:"metrics_buffer_size" default:"200"`
MetricPathPrefix string `envconfig:"metric_path_prefix" default:"firehose"`
BoshDirectorName string `envconfig:"bosh_director_name" default:"cf"`
What I actually had in mind is slightly more flexible: instead of having a single hard-coded label with user-supplied value, what if we could just accept an arbitrary number of key/value pairs that are passed as labels for all metrics? This will allow users to attach other instance-specific metadata to their metrics. There is obviously a risk that providing too many labels will exceed 10 label limit, but this will be clearly visible in the logs (and we could document this as a caveat). What do you think?
Review status: 0 of 10 files reviewed at latest revision, 1 unresolved discussion.
src/stackdriver-nozzle/config/config.go, line 69 at r1 (raw file):
Previously, knyar (Anton Tolchanov) wrote…
What I actually had in mind is slightly more flexible: instead of having a single hard-coded label with user-supplied value, what if we could just accept an arbitrary number of key/value pairs that are passed as labels for all metrics? This will allow users to attach other instance-specific metadata to their metrics. There is obviously a risk that providing too many labels will exceed 10 label limit, but this will be clearly visible in the logs (and we could document this as a caveat). What do you think?
I did originally do that, but stepped back, because: why?
I don't see any value in attaching >1 label to all metrics from a PCF instance, because it should be possible to derive any other values you might want to for that instance based on the one label. (I guess SD doesn't quite have JoinWithLiteralTable yet, but that's what I'm thinking of.)
This also enforces the label name to be "director" which is what the PCF folks want to standardise on as the label to differentiate between PCF instances, according to David Laing. Only allowing users to change the value enforces naming consistency.
Then there's the risk of exceeding the 10 label limit. There's some headroom there but I am not a fan of leaving guns footwards like that...
@fluffle - I rebased your change on develop and force pushed. I don't know if that's proper etiquette or not
Can you also update: tile.yml.erb, jobs/stackdriver-nozzle/spec with the new fields?
Reviewed 7 of 10 files at r1.
Review status: all files reviewed at latest revision, all discussions resolved.
Review status: all files reviewed at latest revision, 1 unresolved discussion, all commit checks successful.
jobs/stackdriver-nozzle/templates/stackdriver-nozzle-ctl.erb, line 38 at r2 (raw file):
export METRICS_BUFFER_SIZE=<%= p('nozzle.metrics_buffer_size', '200') %>
export METRIC_PATH_PREFIX=<%= p('nozzle.metric_path_prefix', 'firehose') %>
export BOSH_DIRECTOR_NAME=<%= p('nozzle.bosh_director_name', 'cf') %>
Can you also update: tile.yml.erb, jobs/stackdriver-nozzle/spec with the new fields?
(reposting as comment so it can be resolved)
I had to force-push updates anyway. I think @knyar's comments may have been on changes that were in the commit from #136, not sure. Hope I've got everything now!
Review status: 7 of 9 files reviewed at latest revision, 1 unresolved discussion.
jobs/stackdriver-nozzle/templates/stackdriver-nozzle-ctl.erb, line 38 at r2 (raw file):
Previously, johnsonj (Jeff Johnson) wrote…
Can you also update: tile.yml.erb, jobs/stackdriver-nozzle/spec with the new fields?
(reposting as comment so it can be resolved)
Aha, thank you. I should have done a recursive grep to see if there was more wiring to be connected up :-)
Reviewed 3 of 3 files at r3.
Review status: all files reviewed at latest revision, 1 unresolved discussion, some commit checks failed.
jobs/stackdriver-nozzle/templates/stackdriver-nozzle-ctl.erb, line 38 at r2 (raw file):
Previously, fluffle (Alex Bee) wrote…
Aha, thank you. I should have done a recursive grep to see if there was more wiring to be connected up :-)
Thanks!
Reviewed 8 of 8 files at r4.
Review status: all files reviewed at latest revision, all discussions resolved.
|
2025-04-01T06:38:12.826683
| 2016-09-08T02:02:33
|
175647897
|
{
"authors": [
"cppforlife",
"evandbrown"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4750",
"repo": "cloudfoundry-incubator/bosh-google-cpi-release",
"url": "https://github.com/cloudfoundry-incubator/bosh-google-cpi-release/issues/78"
}
|
gharchive/issue
|
support global cpi configuration for tags
To allow tagging of all VMs with a specific set of tags to enable setting security groups. Global tags would be added on top of the tags specified via the env arg in create_vm. WDYT about google.default_tags: [tag1, tag2]?
cc @evandbrown @dsboulder
@dsboulder @mrdavidlaing do default_tags give you capabilities that 'tag with job name' doesn't?
Otherwise I'm fine with this and happy to add if it's useful.
@evandbrown this will be useful for setting custom tags director-wide (including the director), e.g. staging-blah. This global CPI configuration (not in VM cloud properties) will enforce tag setting across all machines.
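A sketch of what the proposed global configuration might look like in the CPI's job properties (the property path is the one suggested above and did not exist at the time of this issue):

properties:
  google:
    default_tags: [staging-blah, bosh-managed]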
|
2025-04-01T06:38:12.828961
| 2015-01-30T17:29:14
|
56061212
|
{
"authors": [
"brannon",
"mosoto"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4751",
"repo": "cloudfoundry-incubator/if_warden",
"url": "https://github.com/cloudfoundry-incubator/if_warden/issues/9"
}
|
gharchive/issue
|
Tests fail occasionally with "No mapping between account names and security IDs was done."
The LocalPrincipalManager tests fail occasionally with the above error.
I believe it is related to some eventual consistency in Windows with creating a new user, and then trying to add that user to a security group. The error suggests that the failure was in mapping a username to a SID.
We should consider polling (in the code) to ensure that the user exists and can be mapped to a SID, before continuing and trying to use the new user.
Here is an example stack trace:
Test Name: IronFoundry.Container.Utilities.LocalPrincipalManagerTests.AddedUserAppearsInWardenGroup
Test FullName: IronFoundry.Container.Utilities.LocalPrincipalManagerTests.AddedUserAppearsInWardenGroup
Test Source: c:\git\if\if_warden\IronFoundry.Container.Test\Utilities\LocalPrincipalManagerTests.cs : line 62
Test Outcome: Failed
Test Duration: 0:00:00.164
Result Message: System.Runtime.InteropServices.COMException : No mapping between account names and security IDs was done.
Result StackTrace:
at System.DirectoryServices.PropertyValueCollection.PopulateList()
at System.DirectoryServices.PropertyValueCollection..ctor(DirectoryEntry entry, String propertyName)
at System.DirectoryServices.PropertyCollection.get_Item(String propertyName)
at System.DirectoryServices.AccountManagement.QbeMatcher.Matches(DirectoryEntry de)
at System.DirectoryServices.AccountManagement.SAMQuerySet.MoveNext()
at System.DirectoryServices.AccountManagement.FindResultEnumerator`1.MoveNext()
at System.DirectoryServices.AccountManagement.PrincipalSearcher.FindOne()
at IronFoundry.Container.Utilities.LocalPrincipalManager.AddUserToGroup(PrincipalContext context, String groupName, UserPrincipal user) in c:\git\if\if_warden\IronFoundry.Container.Shared\Utilities\LocalPrincipalManager.cs:line 151
at IronFoundry.Container.Utilities.LocalPrincipalManager.InnerCreateUser(String userName) in c:\git\if\if_warden\IronFoundry.Container.Shared\Utilities\LocalPrincipalManager.cs:line 136
at IronFoundry.Container.Utilities.LocalPrincipalManager.CreateUser(String userName) in c:\git\if\if_warden\IronFoundry.Container.Shared\Utilities\LocalPrincipalManager.cs:line 59
at IronFoundry.Container.Utilities.LocalPrincipalManagerTests.AddedUserAppearsInWardenGroup() in c:\git\if\if_warden\IronFoundry.Container.Test\Utilities\LocalPrincipalManagerTests.cs:line 63
Error Code for this is: 0x80070534
|
2025-04-01T06:38:12.834531
| 2018-08-10T17:22:30
|
349595427
|
{
"authors": [
"iainsproat",
"jasonkeene"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4752",
"repo": "cloudfoundry-incubator/kubo-release",
"url": "https://github.com/cloudfoundry-incubator/kubo-release/issues/241"
}
|
gharchive/issue
|
Do not fail apply-specs if addons-spec is empty
Is your feature request related to a problem? Please describe.
kubectl fails when you apply a manifest that does not contain any objects:
$ echo "---" | kubectl apply -f -
error: no objects passed to apply
If a user provides an empty manifest, for instance --- or whitespace, that will fail the apply-specs errand. We ran into this because OpsMan can not concatenate two values without some sort of additional characters. Additionally, if a user provides --- it will fail, which is not obvious.
Describe the solution you'd like
When applying specs one of three things can be done:
Parse the YAML first and see if it contains data. There are already some empty value checks being done here: https://github.com/cloudfoundry-incubator/kubo-release/blob/42f962bb594aa3092094393560cdd38e571af698/jobs/apply-specs/spec
Apply the specs and if the error matches "error: no objects passed to apply" do not fail the errand.
Just trim whitespace from both sides of the document before doing the empty check. This would solve our issue but users that provide --- would still fail the errand.
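As a sketch of that third option, the errand script could guard the kubectl call along these lines (SPECS here is a stand-in for whatever variable holds the rendered addons spec):

# drop document separators and whitespace before deciding whether to apply
content="$(printf '%s' "${SPECS}" | sed '/^---$/d' | tr -d '[:space:]')"
if [ -z "${content}" ]; then
  echo "addons spec contains no objects, skipping kubectl apply"
else
  printf '%s\n' "${SPECS}" | kubectl apply -f -
fi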
Describe alternatives you've considered
We spent a day trying to get OpsMan to craft either null or "" when concatenating two empty strings. We were not able to find a way to accomplish this. Our workaround is to always have a namespace object created so that the job doesn't fail for this reason.
Additional context
Pretty straight forward issue.
Thanks for highlighting this @jasonkeene
On first glance, we think the current behaviour is appropriate. The error highlights to the user that they have passed malformed/empty spec to CFCR/kubectl, and that they would probably want to investigate as to why they are passing empty files.
I'm going to close this for now, but feel free to discuss further in the comments if you wish.
cc @karampok
Fair enough. We've worked around this on our end by always appending a namespace.
|
2025-04-01T06:38:12.838291
| 2019-09-20T14:30:02
|
496385258
|
{
"authors": [
"KlapTrap",
"nwmac",
"richard-cox"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4753",
"repo": "cloudfoundry-incubator/stratos",
"url": "https://github.com/cloudfoundry-incubator/stratos/pull/3899"
}
|
gharchive/pull-request
|
Fix npm audit vulnerabilities
result of npm audit fix and manually removing stratos-merge-dirs
We should park this until we upgrade to Angular 8. The issue might come out in the wash.
Angular upgrade #3920
@KlapTrap This was raised as a concern by the community. Is there any reason why this shouldn't be merged other than waiting for Angular 8?
@richard-cox @KlapTrap I think we should get this in - but I don't understand the changes to the package lock file.
Many dependencies have changed from being explicitly pinned, e.g. "1.9.3" to "^1.9.3" - it would be good to understand why, so we don't have this flip-flopping with PRs.
I agree that we should fix this; I was just going to wait for the Angular 8 upgrade. ng update will get all of the relevant dependencies to the correct & compatible versions.
Having said that, I've done some of the Angular 8 migration here: https://github.com/cloudfoundry-incubator/stratos/pull/3950 and it's going to take a while to manually migrate some of the code. So, with that in mind, I don't mind this being merged once everyone is happy.
|
2025-04-01T06:38:12.846469
| 2016-04-21T15:44:00
|
150112759
|
{
"authors": [
"cppforlife",
"voelzmo"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4754",
"repo": "cloudfoundry/bosh",
"url": "https://github.com/cloudfoundry/bosh/issues/1234"
}
|
gharchive/issue
|
registry hangs and is unresponsive: bosh deploy fails with execution expired
We recently saw a few deployments failing with an execution expired error while writing the settings for a VM into the registry. The registry stopped responding at all, and there was nothing in the logs.
Even when on the Director itself, calling the registry didn't work. Something like
# curl http://localhost:25777/instances/bla/settings
should fail with an error saying instance 'bla' not found, but just hangs until it runs into a read timeout.
So we attached gdb and looked at the backtrace, seeing that excon couldn't open the ssl_socket and blocked forever in the rescue. Note that the documentation of IO.select gives this piece of code as an explicit example of how to implement a blocking read.
The gdb stack trace:
(gdb) call (void) rb_backtrace()
from /var/vcap/packages/registry/bin/bosh-registry:16:in `<main>'
from /var/vcap/packages/registry/bin/bosh-registry:16:in `load'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/bosh-registry-1.3202.0/bin/bosh-registry:28:in `<top (required)>'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/bosh-registry-1.3202.0/lib/bosh/registry/runner.rb:18:in `run'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/bosh-registry-1.3202.0/lib/bosh/registry/runner.rb:34:in `start_http_server'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/server.rb:159:in `start'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/backends/base.rb:63:in `start'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/eventmachine-1.0.3/lib/eventmachine.rb:187:in `run'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/eventmachine-1.0.3/lib/eventmachine.rb:187:in `run_machine'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:39:in `receive_data'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:54:in `process'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:79:in `pre_process'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:79:in `catch'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:81:in `block in pre_process'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/rack-1.6.4/lib/rack/urlmap.rb:50:in `call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/rack-1.6.4/lib/rack/urlmap.rb:50:in `each'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/rack-1.6.4/lib/rack/urlmap.rb:66:in `block in call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:2021:in `call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:181:in `call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/show_exceptions.rb:21:in `call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/rack-1.6.4/lib/rack/head.rb:13:in `call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/rack-1.6.4/lib/rack/nulllogger.rb:9:in `call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/frame_options.rb:31:in `call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in `call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in `call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/json_csrf.rb:18:in `call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/path_traversal.rb:16:in `call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/xss_header.rb:18:in `call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:894:in `call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:906:in `call!'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in `invoke'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in `catch'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in `block in invoke'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:906:in `block in call!'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1081:in `dispatch!'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in `invoke'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in `catch'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in `block in invoke'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1084:in `block in dispatch!'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:971:in `route!'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:971:in `each'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:972:in `block in route!'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1012:in `process_route'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1012:in `catch'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1014:in `block in process_route'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:974:in `block (2 levels) in route!'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:993:in `route_eval'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:974:in `block (3 levels) in route!'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:974:in `[]'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1610:in `block in compile!'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1610:in `call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/bosh-registry-1.3202.0/lib/bosh/registry/api_controller.rb:22:in `block in <class:ApiController>'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/bosh-registry-1.3202.0/lib/bosh/registry/instance_manager.rb:29:in `read_settings'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/bosh-registry-1.3202.0/lib/bosh/registry/instance_manager.rb:54:in `check_instance_ips'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/bosh-registry-1.3202.0/lib/bosh/registry/instance_manager/openstack.rb:79:in `instance_ips'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/fog-1.34.0/lib/fog/openstack/models/compute/server.rb:151:in `private_ip_addresses'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/fog-1.34.0/lib/fog/openstack/models/compute/server.rb:127:in `floating_ip_addresses'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/fog-1.34.0/lib/fog/openstack/models/compute/server.rb:109:in `all_addresses'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/fog-1.34.0/lib/fog/openstack/requests/compute/list_all_addresses.rb:6:in `list_all_addresses'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/fog-1.34.0/lib/fog/openstack/compute.rb:355:in `request'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/fog-core-1.32.1/lib/fog/core/connection.rb:81:in `request'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/excon-0.45.4/lib/excon/connection.rb:233:in `request'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/excon-0.45.4/lib/excon/middlewares/base.rb:15:in `request_call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/excon-0.45.4/lib/excon/middlewares/base.rb:15:in `request_call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/excon-0.45.4/lib/excon/middlewares/base.rb:15:in `request_call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/excon-0.45.4/lib/excon/middlewares/instrumentor.rb:22:in `request_call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/excon-0.45.4/lib/excon/middlewares/mock.rb:47:in `request_call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/excon-0.45.4/lib/excon/connection.rb:106:in `request_call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/excon-0.45.4/lib/excon/connection.rb:387:in `socket'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/excon-0.45.4/lib/excon/connection.rb:387:in `new'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/excon-0.45.4/lib/excon/ssl_socket.rb:119:in `initialize'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/excon-0.45.4/lib/excon/ssl_socket.rb:122:in `rescue in initialize'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/excon-0.45.4/lib/excon/ssl_socket.rb:122:in `select'
This is an accepted bug in excon 0.45.4 and has been fixed by adding a timeout to IO.select in 0.49.
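Schematically (an illustration, not the actual excon patch; ssl_socket and connect_timeout are just stand-in names):

# blocks indefinitely if the socket never becomes readable (the 0.45.4 behaviour)
ready = IO.select([ssl_socket])

# with a timeout argument, select returns nil once the timeout elapses (the 0.49 fix)
ready = IO.select([ssl_socket], nil, nil, connect_timeout)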
fog-core is already updated to consume excon 0.49. We're now waiting for fog and fog-openstack to be updated to consume fog-core 1.38.0.
While this prevents excon from blocking forever, it might be a good idea to have a detailed look at the registry on what else could be done to prevent one hanging call blocking the entire registry for everyone.
Updating fog and excon sounds ok.
I've created https://www.pivotaltracker.com/story/show/119363951 to bump fog in bosh-registry.
Fixed with bosh 257.14
|
2025-04-01T06:38:12.859908
| 2020-03-17T22:07:11
|
583327696
|
{
"authors": [
"heyjcollins",
"valeriap"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4755",
"repo": "cloudfoundry/postgres-release",
"url": "https://github.com/cloudfoundry/postgres-release/issues/57"
}
|
gharchive/issue
|
Please complete the Cloudfoundry Component Log Timestamp Audit - as per: CF-RFC#030
Hi There!
In an effort to assure all CF components use a consistent logging timestamp as per CF-RFC#030, I'm submitting this issue requesting a little action from y'all on this x-component-team effort.
First
Please complete this audit as soon as possible.
this tracker story template includes additional information and tools to aid in completing the audit.
Second
If additional work is required to meet the requirements outlined in CF-RFC#030 please create, and take action to address, github issue(s) describing the work required to meet those requirements.
Thanks so much!
The CF-RFC#030 Authors (Josh Collins and Amelia Downs)
@heyjcollins
I asked for editor access to the audit spreadsheet.
By the way, note that for the postgres-release we are only talking about BOSH logs (pre-start and monit). PostgreSQL is a third-party app and its logs are only configurable through the log_line_prefix parameter.
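For example, a timestamp-bearing prefix can be set through that parameter in postgresql.conf; whether this exact format satisfies CF-RFC#030 would still need to be checked against the RFC:

log_line_prefix = '%m [%p] '   # %m = timestamp with milliseconds, %p = process ID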
I've completed the audit and opened a story in Pivotal Tracker to address the requirement.
|
2025-04-01T06:38:12.867607
| 2018-06-04T10:50:26
|
329004527
|
{
"authors": [
"DennisDenuto",
"JoshuaAndrew",
"agentgonzo",
"w7089"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4756",
"repo": "cloudfoundry/uaa",
"url": "https://github.com/cloudfoundry/uaa/issues/842"
}
|
gharchive/issue
|
Docker/kubernetes deployment of UAA
How are you deploying the UAA?
I am wanting to deploy the UAA
other (please explain)
I want to be able to deploy UAA using docker (for me on kubernetes, but even a docker container would be fine). There are a few third-party images that people have build, but it would be ideal if we could have an official docker image
@agentgonzo
step 1 generate docker image
root@k8s:~/workspace/uaa# export DOCKER_REPO_URL=index.tenxcloud.com
root@k8s:~/workspace/uaa# export DOCKER_REPO_USER=username
root@k8s:~/workspace/uaa# export<EMAIL_ADDRESS>
root@k8s:~/workspace/uaa# ./gradlew -x javadocJar -x sourceJar clean buildImage
BUILD SUCCESSFUL in 52s
38 actionable tasks: 38 executed
Task timings:
974ms :cloudfoundry-identity-model:compileJava
15024ms :cloudfoundry-identity-server:compileJava
706ms :cloudfoundry-identity-statsd:compileJava
960ms :cloudfoundry-identity-statsd:war
7541ms :cloudfoundry-identity-uaa:war
3267ms :cloudfoundry-identity-samples:cloudfoundry-identity-api:war
3361ms :cloudfoundry-identity-samples:cloudfoundry-identity-app:war
15854ms :buildImage
Test timings:
step 2 run uaa in docker
root@k8s:~# docker run -p 8080:8080 --name uaa -d --restart=always \
-v /opt/uaa.yml:/uaa/uaa.yml \
index.tenxcloud.com/username/uaa:4.14.0
@JoshuaAndrew
does it mean that some 3rd party docker image belonging to index.tenxcloud.com is used?
@w7089
DOCKER_REPO_URL can be any docker image repo; it just happened that I used index.tenxcloud.com here
But there's no official docker repository of Cloud Foundry where an image of UAA can be downloaded from, correct?
@w7089 The gradle task that builds a docker image (./gradlew -x javadocJar -x sourceJar clean buildImage) was contributed by the community. The intent was for development usage only (it hardcodes private keys etc). However, we have discovered that it has been used in the wild for production purposes. This plugin is considered deprecated by the team. We do not test this image in our pipeline, and thus we are not sure if it works. We have a plan to remove this plugin.
To answer your initial question, we do not have an official docker image published to run uaa.
We do have a recommended way to run the uaa, which is via its bosh release. bosh is a way to run software on different IaaSes (AWS, GCP, vSphere, Docker etc). We would recommend running uaa with bosh, targeting Docker as your IaaS.
If this approach interests you, then you can look at our uaa acceptance tests that run uaa inside docker and run tests against that deployment. This is the bosh manifest that we use in our bosh docker deployment.
A quick way to create a bosh that targets a docker host would be to follow this test script. To target a docker host update https://github.com/cppforlife/bosh-docker-cpi-release/blob/master/tests/run.sh#L56
|
2025-04-01T06:38:12.870749
| 2016-12-05T18:41:50
|
193577334
|
{
"authors": [
"coveralls",
"pradtke",
"sreetummidi"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4757",
"repo": "cloudfoundry/uaa",
"url": "https://github.com/cloudfoundry/uaa/pull/496"
}
|
gharchive/pull-request
|
Add table of contents to MD files
A lot of the markdown files are quite lengthy. It is nice to have a ToC to get an overview of what is in them. I used doctoc to generate them.
Not sure if this the way the project wants to go, but it would be nice to have auto-generated ToCs.
Coverage decreased (-0.09%) to 84.966% when pulling 6d5aefd871c7d3a78823cf949750bdccd5deacbc on pradtke:doctoc into 7267d5553a633e9948250fc70772ec03e604fa18 on cloudfoundry:develop.
We have plans to consolidate all docs under doc.cloudfoundry.org. This is in the process of getting reviewed and will be published soon.
|
2025-04-01T06:38:12.877658
| 2011-02-22T17:29:33
|
618276
|
{
"authors": [
"elfassy",
"hbi99",
"matthewdl"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4758",
"repo": "cloudhead/less.js",
"url": "https://github.com/cloudhead/less.js/issues/207"
}
|
gharchive/issue
|
Question: Any way to edit global variables dynamically
Say I have a LESS file with a bunch of global variables (@variable...). Would it be possible for me to change their values using JavaScript (after the page is loaded)?
Thanks!
Hi,
I needed a similar solution described in the beginning of this thread.
For this, I wrote a tiny function that solved it for me.
I forked Less today - check it out:
https://github.com/hbi99/less.js/blob/master/dist/less-1.1.6.js
Worth mentioning; this is a fast solution in order to focus on my primary project - therefore, it can most likely be implemented more gracefully than my version. The function accepts a JSON object and variables declared in that object will override existing variables. Finally, the function does not cause new requests to the server.
Anywho, this is how it can be used:
Sample LESS:
@bgColor: black;
@textColor: yellow;
body {background: @bgColor; color: @textColor;}
From JS:
less.modifyVars({
'@bgColor': 'blue',
'@textColor': 'red'
});
Why not just dynamically create a LESS file with variable overrides and recompile?
@matthewdl - actually, the loaded LESS file(s) are stored in a variable called "session_cache". All "new" less-variables pushed in via "modifyVars" are appended to "session_cache" - thereby treated as a new less file and re-rendered into new CSS.
@Loda - the answer to your question is yes. See my answer to matthewdl - all variables passed in via "modifyVars" will be evaluated as if they were part of the LESS file; just know that the new vars will be appended at the end of the LESS files.
Example:
less.addVars({'@color': 'darken(#F00, 10%)'});
Finally, I would like to mention two more things:
Any "@import" lines will be removed from the loaded LESS file(s) when stored in "session_cache". This is a good thing (IMHO) - all LESS files will be concatenated and stored in the variable "session_cache". No need to re-import them.
If one utilizes animations with transforms, using this function might result in a bad or good visual effect, depending on how you utilize transforms.
Good effect = if the colors animate, it will result in a fading effect.
Bad effect = if one has animations with movement, scaling, etc, which are triggered "onload" - animations will reset and animate from the start.
|
2025-04-01T06:38:12.893716
| 2023-12-06T11:23:57
|
2028325666
|
{
"authors": [
"PixelCook",
"adimiz1",
"temirfe"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4759",
"repo": "cloudinary/cloudinary_flutter",
"url": "https://github.com/cloudinary/cloudinary_flutter/issues/27"
}
|
gharchive/issue
|
RangeError (index): Invalid value: Only valid value is 0: 1 on Pixel 5 phone
When I run my app on Google Pixel 5 (API 32) (both device and emulator), the following throws an error:
CldImg cldimg = CloudinaryObject.fromCloudName(cloudName: 'somename').image('someId');
print(cldimg); // throws: RangeError (index): Invalid value: Only valid value is 0: 1
On other phones everything works.
flutter doctor:
[✓] Flutter (Channel stable, 3.13.9, on macOS 14.0 23A344 darwin-arm64 (Rosetta), locale en-KG)
• Flutter version 3.13.9 on channel stable at /Users/temair/Programs/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision d211f42860 (6 weeks ago), 2023-10-25 13:42:25 -0700
• Engine revision 0545f8705d
• Dart version 3.1.5
• DevTools version 2.25.0
[✓] Android toolchain - develop for Android devices (Android SDK version 33.0.1)
• Android SDK at /Users/temair/Library/Android/sdk
• Platform android-33, build-tools 33.0.1
• ANDROID_HOME = /Users/temair/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.6+0-17.0.6b829.9-10027231)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.0.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15A507
• CocoaPods version 1.14.3
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2022.3)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.6+0-17.0.6b829.9-10027231)
[✓] VS Code (version 1.84.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.76.0
[✓] Connected device (2 available)
• macOS (desktop) • macos • darwin-arm64 • macOS 14.0 23A344 darwin-arm64 (Rosetta)
• Chrome (web) • chrome • web-javascript • Google Chrome 119.0.6045.199
! Error: Browsing on the local area network for iPhone. Ensure the device is unlocked and attached with a cable or associated
with the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[✓] Network resources
• All expected network resources are available.
• No issues found!
Hi @temirfe
Thank you for reporting the issue, we'll take a look ASAP.
Hi there @temirfe,
Currently, we are unable to replicate your issue. Is there any additional information you could provide that can help us reproduce your issue?
|
2025-04-01T06:38:12.906911
| 2022-10-20T21:15:41
|
1417298788
|
{
"authors": [
"cloudpanel-io",
"mertcangokgoz"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4760",
"repo": "cloudpanel-io/clp-wp-varnish-cache",
"url": "https://github.com/cloudpanel-io/clp-wp-varnish-cache/pull/1"
}
|
gharchive/pull-request
|
Turkish language and git-updater support
Hello @cloudpanel-io
For those who want to use varnish cache with cloudpanel from Turkey, I translated the plugin into Turkish and I have added the headers requested by the git-updater plugin.
Thanks a lot @mertcangokgoz
|
2025-04-01T06:38:12.932426
| 2024-11-02T20:32:45
|
2630771463
|
{
"authors": [
"golgoth31"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4761",
"repo": "cloudscalerio/cloudscaler",
"url": "https://github.com/cloudscalerio/cloudscaler/pull/24"
}
|
gharchive/pull-request
|
chore: release main
:robot: I have created a release beep boop
operator: 1.0.0
1.0.0 (2024-11-02)
Features
init (e8be0d4)
Bug Fixes
build (cc58b33)
build (6062759)
build (9d22dff)
build (0304c39)
deps: update all non-major dependencies (c508cb7)
deps: update k8s.io/utils digest to 49e7df5 (c2662a5)
goreleaser (3e1b24e)
release (f2c81ad)
release (6c5efaf)
renovate.json (a566540)
update release (92b145f)
helm: 1.0.0
1.0.0 (2024-11-02)
Features
init (e8be0d4)
This PR was generated with Release Please. See documentation.
:robot: Created releases:
operator-v1.0.0
helm-v1.0.0
:sunflower:
|
2025-04-01T06:38:12.934117
| 2016-12-14T19:30:12
|
195623419
|
{
"authors": [
"bg46z",
"cloudson",
"ryukinix"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4762",
"repo": "cloudson/gitql",
"url": "https://github.com/cloudson/gitql/pull/31"
}
|
gharchive/pull-request
|
make README a little more workplace-friendly
This is a cool project, and I just wanted to make it a little more immediately shareable.
Hell is a good word.
Hey @bg46z. There is no big reason to change these words, but I'm happy to see your interest here.
If you want to help us with the documentation, there is a lot of work related to that, explained for example in issue #45. Feel free to join us :)
|
2025-04-01T06:38:12.935838
| 2022-05-06T11:53:49
|
1227768398
|
{
"authors": [
"110y",
"zchee"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4763",
"repo": "cloudspannerecosystem/wrench",
"url": "https://github.com/cloudspannerecosystem/wrench/pull/54"
}
|
gharchive/pull-request
|
all: use spansql for parse DDL
WHAT
all: use spansql for parse DDL
WHY
So it can parse schema files that include comments
@zchee Please fix the tests (now it can't be compiled...) 🙏
🙄
I'll fix and write test
@zchee Let me fix the test cases...
|
2025-04-01T06:38:12.952602
| 2017-01-05T23:37:40
|
199088048
|
{
"authors": [
"RasmusWernerLarsen",
"emayssat-ms",
"lil-cain"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4764",
"repo": "cloudtools/troposphere",
"url": "https://github.com/cloudtools/troposphere/issues/638"
}
|
gharchive/issue
|
How to create an OriginAccessIdentity for CloudFront
The only information I have on OAI is the following
class S3Origin(AWSProperty):
props = {
'OriginAccessIdentity': (basestring, False),
}
How can I create on for my CloudFront distribution?
Pointers appreciated.
Hi,
OriginAccessIdentity: (basestring, False) means that a string goes here, and that it's not required. I think you probably need to look at the Amazon docs to tell what should go there. Based on the docs here it indicates that you put the CloudFront origin access identity here - so you want something like S3Origin(OriginAccessIdentity=...)
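A minimal troposphere sketch of that usage, assuming the OAI ID was created outside the template (E2EXAMPLEID and the bucket domain are placeholders):

from troposphere.cloudfront import Origin, S3Origin

origin = Origin(
    Id="S3BucketOrigin",
    DomainName="mybucket.s3.amazonaws.com",
    # the string is the OAI reference in the form CloudFormation expects
    S3OriginConfig=S3Origin(
        OriginAccessIdentity="origin-access-identity/cloudfront/E2EXAMPLEID",
    ),
)

The origin can then be passed into DistributionConfig(Origins=[origin], ...) as usual; creating the OAI itself still has to happen outside the template, e.g. via the custom-resource approach mentioned below.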
It is not possible without using custom resources; it's a well-known annoyance of CloudFront + CloudFormation.
Here's a project of my own that contains a Custom Resource for handling just that. Note that the lambda that powers it is by no means perfect: it won't delete the OAI when it's not used. Though adding that is easy, I don't want it to happen automatically.
generate_package_repository.zip
|
2025-04-01T06:38:12.957487
| 2017-02-14T12:22:16
|
207500736
|
{
"authors": [
"emayssat-ms",
"glitchindustries",
"ismaelfernandezscmspain",
"markpeek"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4765",
"repo": "cloudtools/troposphere",
"url": "https://github.com/cloudtools/troposphere/issues/663"
}
|
gharchive/issue
|
How I can change the default param from python
How can I pass a different value from Python to keyname_param?
I Have:
template = Template()
keyname_param = template.add_parameter(Parameter(
"KeyName",
Description="Name of an existing EC2 KeyPair to enable SSH "
"access to the instance",
Default="param",
Type="String",
))
I'm not sure I understand your question. Do you want to add multiple Parameters? There are uses of Parameters in the example directory that might help.
No,
I want to pass a different value to keyname_param.
I have param as a default with the value param.
I'm calling my troposphere template like:
python template.py
When I call CloudFormation I can pass variables to set my template.
How can I do this from troposphere?
@ismaelfernandezscmspain You wouldn't necessarily do it from within troposphere; you would pass it as a variable using boto3 or one of the other AWS SDKs. Below is how you could do it using Boto3:
cfparams = []
cfparams.append({
    'ParameterKey': 'KeyName',
    'ParameterValue': <value you want to pass here or variable>,
    'UsePreviousValue': False
})
and then in your create_stack function use Parameters=cfparams,
That should do what you're asking.
👍 it works!
thanks!
Troposphere just generates a template FILE. Nothing else! You can change the default value in the template generation process or (better) call your template with a non-default parameter! You can accomplish the latter with boto or from the aws cli, i.e.
aws cloudformation create-stack --stack-name myteststack --template-body file:////home//local//test//sampletemplate.json --parameters ParameterKey=KeyPairName,ParameterValue=TestKey ParameterKey=SubnetIDs,ParameterValue=SubnetID1\,SubnetID2
|
2025-04-01T06:38:12.961289
| 2018-08-22T16:07:58
|
353023437
|
{
"authors": [
"ROTGP",
"cloudwebrtc"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4766",
"repo": "cloudwebrtc/flutter-webrtc-server",
"url": "https://github.com/cloudwebrtc/flutter-webrtc-server/issues/1"
}
|
gharchive/issue
|
Demo fails
I've followed the instructions to run the demo. After installing, I run "npm start" which seems to succeed, but then fails. Full output is below.
flutter-webrtc-server $ npm install
npm WARN deprecated<EMAIL_ADDRESS>🙌 Thanks for using Babel: we recommend using babel-preset-env now: please read babeljs.io/env to update!
><EMAIL_ADDRESS>install /Users/Sites/webrtc/flutter-webrtc-server/node_modules/fsevents
> node install
[fsevents] Success: "/Users/Sites/webrtc/flutter-webrtc-server/node_modules/fsevents/lib/binding/Release/node-v59-darwin-x64/fse.node" is installed via remote
><EMAIL_ADDRESS>postinstall /Users/Sites/webrtc/flutter-webrtc-server/node_modules/jss
> node -e "console.log('\u001b[35m\u001b[1mLove JSS? You can now support us on open collective:\u001b[22m\u001b[39m\n > \u001b[34mhttps://opencollective.com/jss/donate\u001b[0m')"
Love JSS? You can now support us on open collective:
> https://opencollective.com/jss/donate
npm notice created a lockfile as package-lock.json. You should commit this file.
added 1137 packages in 87.457s
flutter-webrtc-server $ npm start
><EMAIL_ADDRESS>start /Users/Sites/webrtc/flutter-webrtc-server
> npm-run-all --parallel run-server run-webpack-dev-server
><EMAIL_ADDRESS>run-server /Users/Sites/webrtc/flutter-webrtc-server
> node server/index.js
><EMAIL_ADDRESS>run-webpack-dev-server /Users/Sites/webrtc/flutter-webrtc-server
> webpack-dev-server --mode development --https --cert ./certs/cert.pem --key ./certs/key.pem --hot --inline --progress --colors --watch --compress --content-base ./dist --port 8086 --host <IP_ADDRESS>
Start WS Server: bind => ws://<IP_ADDRESS>:4442
Start WSS Server: bind => wss://<IP_ADDRESS>:4443
10% building modules 1/1 modules 0 activeℹ 「wds」: Project is running at https://<IP_ADDRESS>:8086/
ℹ 「wds」: webpack output is served from /
ℹ 「wds」: Content not from webpack is served from /Users/Sites/webrtc/flutter-webrtc-server/dist
✖ 「wdm」: Hash: 20c9379ed53a3fe1c1ae
Version: webpack 4.17.1
Time: 3838ms
Built at: 22/08/2018 18:02:05
1 asset
Entrypoint main = main.20c9379e.bundle.js
[./node_modules/loglevel/lib/loglevel.js] 7.68 KiB {main} [built]
[./node_modules/react-dom/index.js] 1.33 KiB {main} [built]
[./node_modules/react/index.js] 190 bytes {main} [built]
[./node_modules/strip-ansi/index.js] 161 bytes {main} [built]
[./node_modules/url/url.js] 22.8 KiB {main} [built]
[./node_modules/webpack-dev-server/client/index.js?https://<IP_ADDRESS>:8086] (webpack)-dev-server/client?https://<IP_ADDRESS>:8086 7.78 KiB {main} [built]
[./node_modules/webpack-dev-server/client/overlay.js] (webpack)-dev-server/client/overlay.js 3.58 KiB {main} [built]
[./node_modules/webpack/hot sync ^\.\/log$] (webpack)/hot sync nonrecursive ^\.\/log$ 170 bytes {main} [built]
[0] multi (webpack)-dev-server/client?https://<IP_ADDRESS>:8086 (webpack)/hot/dev-server.js ./src/index.js 52 bytes {main} [built]
[./node_modules/webpack/hot/dev-server.js] (webpack)/hot/dev-server.js 1.61 KiB {main} [built]
[./node_modules/webpack/hot/emitter.js] (webpack)/hot/emitter.js 75 bytes {main} [built]
[./node_modules/webpack/hot/log-apply-result.js] (webpack)/hot/log-apply-result.js 1.27 KiB {main} [built]
[./node_modules/webpack/hot/log.js] (webpack)/hot/log.js 1.11 KiB {main} [built]
[./src/App.js] 13.1 KiB {main} [built]
[./src/index.js] 466 bytes {main} [built]
+ 328 hidden modules
ERROR in<EMAIL_ADDRESS>Module not found: Error: Can't resolve '@babel/runtime/helpers/builtin/interopRequireDefault' in '/Users/Sites/webrtc/flutter-webrtc-server/node_modules/@material-ui/icons'
<EMAIL_ADDRESS>3:29-92
@ ./src/App.js
@ ./src/index.js
@ multi (webpack)-dev-server/client?https://<IP_ADDRESS>:8086 (webpack)/hot/dev-server.js ./src/index.js
ERROR in<EMAIL_ADDRESS>Module not found: Error: Can't resolve '@babel/runtime/helpers/builtin/interopRequireDefault' in '/Users/Sites/webrtc/flutter-webrtc-server/node_modules/@material-ui/icons'
<EMAIL_ADDRESS>3:29-92
@ ./src/App.js
@ ./src/index.js
@ multi (webpack)-dev-server/client?https://<IP_ADDRESS>:8086 (webpack)/hot/dev-server.js ./src/index.js
ERROR in<EMAIL_ADDRESS>Module not found: Error: Can't resolve '@babel/runtime/helpers/builtin/interopRequireDefault' in '/Users/Sites/webrtc/flutter-webrtc-server/node_modules/@material-ui/icons'
<EMAIL_ADDRESS>3:29-92
@ ./src/App.js
@ ./src/index.js
@ multi (webpack)-dev-server/client?https://<IP_ADDRESS>:8086 (webpack)/hot/dev-server.js ./src/index.js
ERROR in<EMAIL_ADDRESS>Module not found: Error: Can't resolve '@babel/runtime/helpers/builtin/interopRequireDefault' in '/Users/Sites/webrtc/flutter-webrtc-server/node_modules/@material-ui/icons'
<EMAIL_ADDRESS>3:29-92
@ ./src/App.js
@ ./src/index.js
@ multi (webpack)-dev-server/client?https://<IP_ADDRESS>:8086 (webpack)/hot/dev-server.js ./src/index.js
ERROR in<EMAIL_ADDRESS>Module not found: Error: Can't resolve '@babel/runtime/helpers/builtin/interopRequireDefault' in '/Users/Sites/webrtc/flutter-webrtc-server/node_modules/@material-ui/icons'
<EMAIL_ADDRESS>3:29-92
@ ./src/App.js
@ ./src/index.js
@ multi (webpack)-dev-server/client?https://<IP_ADDRESS>:8086 (webpack)/hot/dev-server.js ./src/index.js
ERROR in<EMAIL_ADDRESS>Module not found: Error: Can't resolve '@babel/runtime/helpers/builtin/interopRequireDefault' in '/Users/Sites/webrtc/flutter-webrtc-server/node_modules/@material-ui/icons'
<EMAIL_ADDRESS>3:29-92
@ ./src/App.js
@ ./src/index.js
@ multi (webpack)-dev-server/client?https://<IP_ADDRESS>:8086 (webpack)/hot/dev-server.js ./src/index.js
ERROR in<EMAIL_ADDRESS>Module not found: Error: Can't resolve '@babel/runtime/helpers/builtin/interopRequireDefault' in '/Users/Sites/webrtc/flutter-webrtc-server/node_modules/@material-ui/icons'
<EMAIL_ADDRESS>3:29-92
@ ./src/App.js
@ ./src/index.js
@ multi (webpack)-dev-server/client?https://<IP_ADDRESS>:8086 (webpack)/hot/dev-server.js ./src/index.js
ERROR in<EMAIL_ADDRESS>Module not found: Error: Can't resolve '@babel/runtime/helpers/builtin/interopRequireDefault' in '/Users/Sites/webrtc/flutter-webrtc-server/node_modules/@material-ui/icons/utils'
<EMAIL_ADDRESS>3:29-92
<EMAIL_ADDRESS> @ ./src/App.js
@ ./src/index.js
@ multi (webpack)-dev-server/client?https://<IP_ADDRESS>:8086 (webpack)/hot/dev-server.js ./src/index.js
Child html-webpack-plugin for "index.html":
1 asset
Entrypoint undefined = ./index.html
[./node_modules/html-webpack-plugin/lib/loader.js!./src/index.html] 370 bytes {0} [built]
[./node_modules/lodash/lodash.js] 527 KiB {0} [built]
[./node_modules/webpack/buildin/global.js] (webpack)/buildin/global.js 489 bytes {0} [built]
[./node_modules/webpack/buildin/module.js] (webpack)/buildin/module.js 497 bytes {0} [built]
ℹ 「wdm」: Failed to compile.
You can try npm i babel-runtime; the project may be missing an npm dependency.
That solved it, although I had to delete node_modules and package-lock file, install babel-runtime, then install as per usual.
|
2025-04-01T06:38:12.963663
| 2022-07-05T20:54:31
|
1294757785
|
{
"authors": [
"FGYFFFF",
"hiqsociety"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4767",
"repo": "cloudwego/hertz-examples",
"url": "https://github.com/cloudwego/hertz-examples/issues/14"
}
|
gharchive/issue
|
For rendering, any features to do header.tmpl / footer. tmpl etc?
I realised ctx.HTML is much slower than ctx.String... is it possible to speed this up? 50k req/s vs 15k req/s
For rendering, any features to do header.tmpl / footer. tmpl etc?
For header and footer, so as to have consistency. Just curious how to stack them together; right now I'm doing a lot of repeats. How do I stack them together?
ctx.HTML(consts.StatusOK, "index.tmpl", utils.H{
"t": "Main website",
"h": "Head website",
"b": "Body website",
})
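One common way to stack them (a sketch, not an official hertz feature) is standard Go html/template composition: define the header and footer as named templates and include them from each page, assuming all three .tmpl files are loaded together:

{{/* header.tmpl */}}
{{ define "header" }}<html><head><title>{{ .t }}</title></head><body><h1>{{ .h }}</h1>{{ end }}

{{/* footer.tmpl */}}
{{ define "footer" }}</body></html>{{ end }}

{{/* index.tmpl */}}
{{ template "header" . }}
<p>{{ .b }}</p>
{{ template "footer" . }}

The handler can keep calling ctx.HTML with index.tmpl and the same t/h/b values; the header and footer markup lives in one place.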
@hiqsociety Can you use front and back-end separation design?
|
2025-04-01T06:38:12.970672
| 2023-04-10T06:23:54
|
1660309830
|
{
"authors": [
"GuangmingLuo",
"welkeyever"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4768",
"repo": "cloudwego/hertz",
"url": "https://github.com/cloudwego/hertz/pull/712"
}
|
gharchive/pull-request
|
docs(README): update documentation description and link
What type of PR is this?
docs
Check the PR title.
[X] This PR title match the format: <type>(optional scope): <description>
[X] The description of this PR title is user-oriented and clear enough for others to understand.
(Optional) Translate the PR title into Chinese.
更新 readme 中文档相关的描述和超链接
(Optional) More detail description for this PR(en: English/zh: Chinese).
en:
zh(optional):
Which issue(s) this PR fixes:
Long-term stale status, the context is no longer clear, close it first, and resubmit the pr if necessary.
|
2025-04-01T06:38:13.006779
| 2021-11-19T01:37:14
|
1058036581
|
{
"authors": [
"MinhQuang2021",
"whwang299"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4769",
"repo": "clovaai/spade",
"url": "https://github.com/clovaai/spade/issues/8"
}
|
gharchive/issue
|
Custom model
I found that LayoutLMv2 takes both the image and the text context into pre-training. Why did you use BERT and not LayoutLMv2? I want to customize the model to use LayoutLMv2 as the encoder; is that possible? Thank you so much
Hi @MinhQuang2021
You may customize this part
https://github.com/clovaai/spade/blob/a85574ceaa00f1878a23754f283aa66bc2daf082/spade/model/model.py#L743-L775
The backbone is based on Hugging Face transformers, thus it shouldn't be difficult to adapt LayoutLM.
During the major development, LayoutLM wasn't available. Also, LayoutLM is not pre-trained for Japanese and Indonesian. Thus we initialized the spade encoder with a subset of BERT weights (note that the spade encoder ≠ BERT).
Best
Thank you, I really want to apply it to my custom dataset for my language. But creating a dataset with labels like CORD is quite time consuming, so I want to ask: how much data is needed for good results? Is the standard dataset for this model important? Does this model generalize well to unseen data?
Can you give me some background on the EL task? I still find it quite confusing that the paper just generates a graph and decodes it.
I am learning it for my graduation project. Thank you very much.
I noticed that when self.hparam.token_lv_boxing == True is set, the parse result is different. Why do you set it to True in both Cord_test.yml and Cord_train.yml? Can you explain how it works?
Why do you set token_lv_boxing=True in CORD, and token_lv_boxing=False in FUNSD?
How does token_lv_boxing work? I noticed it changes the parse result when training.
Hello, can you help me find the code that applies inferring_method?
- tca_rel_s
- tca_rel_g
Hi @MinhQuang2021
I would say roughly 1,000 documents are required for reasonable performance.
Under a zero-shot setting, the domain should be similar for reasonable performance. Otherwise, you'll observe a large performance drop.
About the EL task, FUNSD consists of four types of fields: "Header", "Question", "Answer", "ETC".
The EL task aims to connect "Header" → "Question", "Question" → "Answer".
So it is kind of "grouping two fields where each field consists of multiple serialized words".
When token_lv_boxing=True, spade draws directed arrows between tokens. For the noisy OCR data like receipt, this makes a better result. For FUNSD, the task assumes a perfect OCR so I turned it off.
The data augmentation option is automatically turned off during prediction. Sorry for the confusion.
Sorry for being late in reply. Good luck with your project!
|
2025-04-01T06:38:13.026988
| 2017-05-25T12:19:47
|
231324225
|
{
"authors": [
"clue",
"d8ahazard"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4770",
"repo": "clue/php-mdns-react",
"url": "https://github.com/clue/php-mdns-react/issues/8"
}
|
gharchive/issue
|
No Response when Querying MDNS Cast Devices
Hello,
I'm trying to implement your MDNS library in a project I have that aims to communicate with chromecast devices.
I'm trying to get results for the record _googlecast._tcp.local, but keep getting a TimeoutException:
Thu, 18 May 2017 14:37:32 -0500 [ DEBUG ] api.php::testReact - Function fired!
Thu, 18 May 2017 14:37:37 -0500 [ ERROR ] api.php::{closure} - Error:
React\Dns\Query\TimeoutException: DNS query for _googlecast._tcp.local timed out in
/volume1/Webroot/Phlex/vendor/clue/mdns-react/src/MulticastExecutor.php:84
Stack trace:
#0 [internal function]: Clue\React\Mdns\MulticastExecutor->Clue\React\Mdns\{closure}
(Object(React\EventLoop\Timer\Timer))
#1 /volume1/Webroot/Phlex/vendor/react/event-loop/src/Timer/Timers.php(90):
call_user_func(Object(Closure), Object(React\EventLoop\Timer\Timer))
#2 /volume1/Webroot/Phlex/vendor/react/event-loop/src/StreamSelectLoop.php(177):
React\EventLoop\Timer\Timers->tick()
#3 /volume1/Webroot/Phlex/api.php(1857): React\EventLoop\StreamSelectLoop->run()
#4 /volume1/Webroot/Phlex/api.php(1732): testReact()
#5 /volume1/Webroot/Phlex/api.php(164): scanDevices()
#6 {main}
I don't see any other glaring errors in my logs that would indicate a systemic error, but I could also be mistaken.
I'm invoking the lookup like so:
function testReact() {
write_log("Function fired!");
$name = '_googlecast._tcp.local';
$loop = React\EventLoop\Factory::create();
$factory = new Clue\React\Mdns\Factory($loop);
$mdns = $factory->createResolver();
$mdns->resolve($name)->then(function ($value) {
write_log("Value: ".$value);
},function ($value) {
write_log("Error: ".$value,"ERROR");
});
$loop->run();
}
I'm running this on Synology with apache 2.4 and php7.
Thanks!
Thanks for your interesting question!
It looks like what you're trying to achieve is actually DNS-SD instead of mDNS, as mentioned in the readme:
This library implements the mDNS protocol as defined in RFC 6762. Note that this protocol is related to, but independent of, DNS-Based Service Discovery (DNS-SD) as defined in RFC 6763.
In other words: mDNS deals with finding the IP of a given hostname via multicast, while DNS-SD can be used to find a list of devices that offer a certain service.
I hope this helps :+1:
|
2025-04-01T06:38:13.041399
| 2022-10-20T04:41:06
|
1415932324
|
{
"authors": [
"dixudx",
"victory460"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4771",
"repo": "clusternet/clusternet",
"url": "https://github.com/clusternet/clusternet/issues/509"
}
|
gharchive/issue
|
Does the clusternet scheduler support SchedulerExtender for extension?
Does the clusternet scheduler support SchedulerExtender for extension?
No. And I don't suggest using extender to extend scheduler.
Clusternet scheduler is using scheduling framework, which uses plugins to implement different scheduling phases. And it is easy to hook your out-of-tree plugins.
If I want to update the logic of a score plugin, is there a way to hot-update the plugin without restarting the clusternet scheduler?
No. For scheduling framework, you need to rebuild scheduler and update it.
Scheduler extender is also not suggested by Kubernetes community.
OK. Thanks a lot.
|
2025-04-01T06:38:13.043519
| 2021-08-30T10:10:27
|
982627340
|
{
"authors": [
"dixudx",
"huxiaoliang"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4772",
"repo": "clusternet/clusternet",
"url": "https://github.com/clusternet/clusternet/pull/94"
}
|
gharchive/pull-request
|
comment out checking cluster type on registration
What type of PR is this?
kind/bug
What this PR does / why we need it:
Currently there is only one kind of ClusterType, i.e., EdgeCluster. Comment out the cluster type check on registration to allow more customized cluster types.
Which issue(s) this PR fixes:
Fixes #89
Special notes for your reviewer:
/lgtm
|
2025-04-01T06:38:13.046639
| 2015-10-02T23:17:24
|
109588349
|
{
"authors": [
"AsaAyers",
"akshat1",
"swang"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4773",
"repo": "clutchski/coffeelint",
"url": "https://github.com/clutchski/coffeelint/issues/511"
}
|
gharchive/issue
|
Empty catch blocks trigger indentation errors for version > 1.11.1
An empty catch block (with or without comments) causes coffeelint to complain about inconsistent indentation.
Sample code
try
  console.log 'a tisket'
  console.log 'a tasket'
catch err
  # We'll eat the exception
console.log 'All done'
<EMAIL_ADDRESS> finds no errors with this code. However, later versions (1.12.0 and 1.12.1) both give the following error
$ coffeelint sample.coffee
✗ sample.coffee
✗ #4: Line contains inconsistent indentation. Expected 2 got 0.
✗ Lint! » 1 error and 0 warnings in 1 file
Thanks for the fix Shuan Wang :) :+1:
no prob. I just release v1.13.0 with all the fixes.
Please update the changelog too.
|
2025-04-01T06:38:13.049403
| 2021-03-08T20:26:51
|
824922992
|
{
"authors": [
"clux"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4774",
"repo": "clux/kube-rs",
"url": "https://github.com/clux/kube-rs/pull/454"
}
|
gharchive/pull-request
|
remove legacy runtime
because:
seeing pretty decent adoption (edit: of kube-runtime) across github code search
haven't gotten complaints about new runtime
no need to keep this entirely unmaintained module around to confuse people
some counter-points. There is still matt butcher's blog-post that made it to hackernews which uses it, but that's hopefully passed now?
I have updated all my old blog posts (and even written a new one).
Can hold off until we have written something official about kube-runtime potentially.
What do people think?
i'll trust the thumbs up consensus :-)
|
2025-04-01T06:38:13.097256
| 2024-08-02T13:05:12
|
2444924645
|
{
"authors": [
"cmdruid",
"maxgmer"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4775",
"repo": "cmdruid/tapscript",
"url": "https://github.com/cmdruid/tapscript/issues/43"
}
|
gharchive/issue
|
Invalid Schnorr Signature - Example: Basic spend using key-path
I have a method to send sats from one p2tr address to another one. I followed this example to implement it.
The validation goes through, but when I submit the transaction to mempool.space (or blockstream, same thing), I get the following response:
sendrawtransaction RPC error: {"code":-26,"message":"non-mandatory-script-verify-flag (Invalid Schnorr signature)"}
Here is the code I send sats with (essentially the same as the example, but with many UTXOs in tx input):
https://pastebin.com/hBGXNnJe
Here is the decoded TX that fails to be broadcast:
{
  version: 2,
  vin: [
    {
      txid: 'fd13e6e779960e0ba56405928524d1077575568a58b8e123275f11cb8630a816',
      vout: 0,
      scriptSig: [],
      sequence: 'fffffffd',
      witness: [Array]
    }
  ],
  vout: [
    {
      value: 1000n,
      scriptPubKey: '5120b32ce90751de8b076dc9c8f6d46f967abe6cc7a4e97c452ff17444bb60c5890b'
    }
  ],
  locktime: 0
}
I think the problem may be here with txInput.vout:
Signer.taproot.sign(taprootPk, txData, txInput.vout)
The index value should be the index of the input being signed, so like this:
for (let i = 0; i < txData.vin.length; i++) {
  txData.vin[i].witness = [
    Signer.taproot.sign(taprootPk, txData, i),
  ];
}
Try that change and let me know if it works.
P.S. The n denotes a BigInt; BigInt is a relatively new type in JavaScript.
@cmdruid thanks, fixed this one! But the error is still there, unfortunately :( something else is also wrong
Where are you getting the keys from?
You may want to try skipping this part:
const [taprootPk] = Tap.getSecKey(pk);
const [taprootPub] = Tap.getPubKey(pub);
This step adds an empty tweak to each key in your key-pair, which obfuscates the keys for privacy.
However it is not necessary and not all wallets bother to do this step, so you may want to try omitting it and signing with the keys you have directly.
@cmdruid I tried skipping tweaking, but no luck, same error.
I generate the keys myself, then tweak them, then give the taproot address to the user.
Then somewhere in the future the user funds the taproot address and I do this sending method (I invoke it with not tweaked private key and the method tweaks the key inside it, so that it has access to the taproot address the user has funded).
Also, so that you don't have to waste time, while trying to guess what's wrong, I created this minimal example with all the method inputs, how I generate the keys etc., before I invoke the method:
https://pastebin.com/qH73NDqC#google_vignette
Do you end up getting the same tweaked key-pair in both cases? Does the tweaked public key, when encoded as an address, match the address of the utxo being spent?
@cmdruid Yes, the same tweaked key-pair in both cases 😢
UTXO address matches as well, yes, I re-verify UTXO data for the address being spent doing this request:
GET https://blockstream.info/testnet/api/address/tb1pxez9u8zkzhgephpus2a3fuz7jujhpeujml32a5d4g57aj9wjdqrqh02yen/utxo
What does Signer.taproot.verify(txData, i) not check during verification? I know UTXOs could be one thing, but those are correct, what else could it miss?
Maybe it will be easier for me to debug, if I at least know what could be wrong.
@cmdruid I also can reproduce this error in keyspend.test.ts.
I just removed the wallet stuff, as I don't have a local node.
Then I put my private key there, my UTXO input data/output data and set the network to 'testnet'.
Then, after running the test (it runs successfully), I took the TX hex and posted it to the blockstream node like this:
curl --request POST
--url https://blockstream.info/testnet/api/tx
--header 'Content-Type: text/plain'
--header 'User-Agent: insomnia/9.3.2'
--data<PHONE_NUMBER>01010449081422de6fda2d995662ab51909a3a45bf422bcf47b40af047c94d467b360000000000fdffffff015203000000000000160014d0829aa329e5716e71b10cb17a6a33df8caf72a3014095df63eff8710b308c2f69b693cf7db731cd320c0789d5fad412a1911d1105e243efbd39d2d74cfd3dd0b9fe6b8e1e5387b92b61364c4d664b609441b984d2a000000000
The pubkey that I am decoding from the address "tb1pxez9u8zkzhgephpus2a3fuz7jujhpeujml32a5d4g57aj9wjdqrqh02yen" is the following:
36445e1c5615d190dc3c82bb14f05e972570e792dfe2aed1b5453dd915d26806
Are you signing with a private key that matches this public key?
@cmdruid
Secret: c8ec44f2fd52560af4aaab3210c104d18cf5904c75288d182124c2905da69102
You can import into Unisat Testnet to see for yourself. Address is derived from public and you'll see the correct address there.
Just try this test and see for yourself:
https://pastebin.com/raw/vhn2YJvt
You'll see all the correct keys in the test, the UTXO used here is also correct. And the test will pass, of course.
But then, use the TX Hex and submit it to blockstream (or mempool.space). The BTC node will reject it.
Submission request:
curl --request POST
--url https://blockstream.info/testnet/api/tx
--header 'Content-Type: text/plain'
--data
@cmdruid okay, I realized my error, incorrect value was inputted into prevout.
Sorry for disturbing :)
no problem. I'm glad that you were able to get it working!
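For anyone else hitting this: the taproot (BIP 341) sighash commits to the value and scriptPubKey of every input being spent, so the prevout attached to each vin has to describe the funded UTXO exactly. A rough sketch of building that from the explorer endpoint already used above (field names follow the decoded transaction shown earlier; address and tapPubKey stand in for the values from the snippets above):
async function buildInputs(address, tapPubKey) {
  const utxos = await fetch(
    'https://blockstream.info/testnet/api/address/' + address + '/utxo'
  ).then((res) => res.json())
  return utxos.map((utxo) => ({
    txid: utxo.txid,
    vout: utxo.vout,
    prevout: {
      value: utxo.value,                 // sats actually locked in this UTXO
      scriptPubKey: '5120' + tapPubKey,  // must match the funded taproot address
    },
  }))
}
If either field is off, Signer.taproot.verify can still pass locally against the data you supplied while the node rejects the signature, which matches the behaviour seen in this thread.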
|
2025-04-01T06:38:13.115140
| 2024-01-09T11:43:56
|
2072209223
|
{
"authors": [
"clelange",
"giacomoortona"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4776",
"repo": "cms-analysis/HiggsAnalysis-CombinedLimit",
"url": "https://github.com/cms-analysis/HiggsAnalysis-CombinedLimit/issues/896"
}
|
gharchive/issue
|
Docker instructions for Mac users
Hi @clelange,
I had a few issues when trying to open any graphic element in the docker standalone version on MacOS. I realised that in order to run properly on Mac, one should first set the DISPLAY variable in the docker run. Would it be possible to update the docker run command to:
docker run --hostname=97e12d678fa2 --user=cmsusr --env=PYTHONPATH=/usr/local/venv/lib::/code/HiggsAnalysis/CombinedLimit/build/lib/python --env=HOME=/home/cmsusr --env=CMSSW_BASE=/code --env=PATH=/usr/local/venv/bin:/usr/local/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/bin:/code/HiggsAnalysis/CombinedLimit/build/bin --env=LD_LIBRARY_PATH=/usr/local/venv/lib:/usr/local/venv/lib64:/usr/local/venv/lib:/usr/local/lib:/code/HiggsAnalysis/CombinedLimit/build/lib --env=ROOTSYS=/usr/local/venv --env=CLING_STANDARD_PCH=none --env=USER=cmsusr --env=GEOMETRY=1920x1080 --env=DISPLAY=host.docker.internal:0 --env=CC=/usr/local/venv/bin/gcc --env=CXX=/usr/local/venv/bin/g++ --env=GSL_ROOT_DIR=/usr/local/venv --workdir=/code --restart=no<EMAIL_ADDRESS>--label='maintainer.name=Clemens Lange' --runtime=runc -t -d gitlab-registry.cern.ch/cms-cloud/combine-standalone:latest
Afaict, it should not create problems to windows/linux users (although, I couldn't test it). Maybe it would be safer to maintain 2 different run commands?
Then we should instruct Mac users to also run sudo xhost +localhost on their machines (see: https://gist.github.com/paul-krohn/e45f96181b1cf5e536325d1bdee6c949)
Hi @giacomoortona -- thanks for reporting this issue. While X windows access is possible, we've observed that it's easier for CMS open data users to use VNC instead. Could you try if the instructions at https://gitlab.cern.ch/cms-cloud/root-vnc/-/blob/master/README.md?ref_type=heads work for you?
I'll look into updating the image but since I'm travelling this and next week it might take a bit longer. It should not break Linux/Windows but might break VNC.
Hi @clelange, Thank you for your reply.
Maybe it's just my very limited docker knowledge, but it seems to me that there is no VNC server coupled to the combine docker image, am I wrong? In any case this is not urgent, it can definitively wait until you come back.
Thank you,
Giacomo
The new (slim) container doesn't include VNC anymore. I can look into adding it though, making the necessary fixes pointed out by @giacomoortona
Hi @clelange, AFAICT adding the DISPLAY environment variable + xhost should work on Mac, KDE and maybe elsewhere. Although for reasons I can't yet understand, sometimes containers created with my suggested run command get stuck after "docker start". They do work if one uses "docker exec" instead, but you might want to have a look at my suggestion. Maybe there is something silly in my command (I tried figuring it out, but with no success so far).
|
2025-04-01T06:38:13.126076
| 2018-02-12T17:38:26
|
296463427
|
{
"authors": [
"amarini",
"jmduarte",
"nucleosynthesis"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4777",
"repo": "cms-analysis/HiggsAnalysis-CombinedLimit",
"url": "https://github.com/cms-analysis/HiggsAnalysis-CombinedLimit/pull/454"
}
|
gharchive/pull-request
|
Parametric shapes
fix for RooParametricShapeBinPdf.
Still needs some testing for the correctness of the outcome, but the PR may be useful as a starting point for the future.
Idea: pointers are not saved in the Workspace, therefore when loaded again they are not valid.
This minimal change makes the pointer members mutable (because evaluate is const), and when they need to be used, if not properly initialized, initialization is performed.
Hi Andrea,
Thanks for this PR. It would be nice to have some check that it still gives the same results, as you say, before merging.
For example, it's probably enough just to make a comparison as was done using a simple RooExponential for the documentation:
https://cms-hcomb.gitbooks.io/combine/content/part2/settinguptheanalysis.html#caveat-on-using-parametric-pdfs-with-binned-datasets
@amarini, I just started a new PR #493, which changes how RooParametricShapeBinPdf computes the integrals internally (uses RooAbsReal::createIntegral(), rather than the RooAbsReal::asTF(), which would not allow parameters to functions themselves).
Can you take a look and make sure this new branch still works for your use case (and if your original issue is still present?)?
Thanks!
Javier
|
2025-04-01T06:38:13.414567
| 2018-05-05T21:15:23
|
320541814
|
{
"authors": [
"bohanjason",
"coveralls",
"dqrs"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4778",
"repo": "cmu-db/peloton",
"url": "https://github.com/cmu-db/peloton/pull/1349"
}
|
gharchive/pull-request
|
[15721] TileGroup Compaction and GC Fixes
Overview of Project
Our project has 3 main aspects:
Enabling the Garbage Collector to free empty tile groups and reclaim their memory (Done)
Fixing several important bugs in the Garbage Collector and Transaction Manager (Done)
Merging sparsely occupied tile groups to save memory (Done, Not Tested)
Status
This PR resolves most of the issues identified in issue #1325. It includes the results of a thorough correctness audit we performed on Peloton's garbage collection system. It includes a whole new test suite for the Transaction-Level Garbage Collector and several important bug fixes to the Garbage Collector and Transaction Manager. It also includes our previous work which enhances the Garbage Collector to free empty TileGroups when all of their tuple slots have been recycled. It adds a class called TileGroupCompactor that performs compaction of tile groups. It also includes a large number of changes necessary to rebase on the latest version of Peloton (Mengran's Catalog changes).
The summary of changes are:
GCManager::RecycleTupleSlot allows unused ItemPointers to be returned without going through the entire Unlink and Reclaim process.
Modified TOTransactionManager to pass tombstones created by deletes to the GCManager.
Modified DataTable's Insert to return the ItemPointer to the GCManager in the case of a failed insert.
Modified DataTable's InsertIntoIndexes to iterate through indexes and remove inserted keys in the event of a failure.
Modified GCManager's Unlink function to clean indexes from garbage created by COMMIT_DELETE, COMMIT_UPDATE, and ABORT_UPDATE.
Added 14 tests to transaction_level_gc_manager_test.cpp to handle more complex GC scenarios. Currently 4 of these tests still fail due to indexes being polluted with old keys, but we believe this will require more significant changes at the execution layer to resolve. We believe we have resolved all of the tuple-level GC bugs and most of the index bugs. We have disabled the 4 checks that fail and will open a new issue describing those scenarios only.
We have disabled some of the old GC tests because they need updating to conform to the new GC behavior.
As my teammates said, Nice Job ! !
Coverage decreased (-77.4%) to 0.0% when pulling 36d546d69b8f20b0f891e3faf296545b393694fd on mbutrovich:gc_fixes into 881a8e6d34296d372593ac9714d6a71a5500f82c on cmu-db:master.
|
2025-04-01T06:38:13.416115
| 2020-07-02T17:58:50
|
650112130
|
{
"authors": [
"gonzalezjo",
"mbutrovich"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4779",
"repo": "cmu-db/terrier",
"url": "https://github.com/cmu-db/terrier/pull/1004"
}
|
gharchive/pull-request
|
Revert "Explicit instantiation of spdlog template functions (#995)"
This reverts commit 686eb69f. Not sure we want to merge this immediately, but it sounds like #995 didn't really help so this is a PR to revert it if we want.
I didn't see this PR when I reverted that commit in #1026. For what it's worth, #1026 does this as a necessary prerequisite for statically linking spdlog.
Redundant.
|
2025-04-01T06:38:13.448267
| 2021-06-25T20:00:01
|
930466141
|
{
"authors": [
"CNCF-Bot",
"amye"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4780",
"repo": "cncf/landscape",
"url": "https://github.com/cncf/landscape/pull/2182"
}
|
gharchive/pull-request
|
Adding Skooner to the sandbox
Pre-submission checklist:
Please check each of these after submitting your pull request:
[ ] Are you only including a repo_url if your project is 100% open source? If so, you need to pick the single best GitHub repository for your project, not a GitHub organization.
[ ] Is your project closed source or, if it is open source, does your project have at least 300 GitHub stars?
[ ] Have you picked the single best (existing) category for your project?
[ ] Does it follow the other guidelines from the new entries section?
[ ] Have you added your SVG to hosted_logos and referenced it there?
[ ] Does your logo clearly state the name of the project/product and follow the other logo guidelines?
[ ] Does your project/product name match the text on the logo?
[ ] Have you verified that the Crunchbase data for your organization is correct (including headquarters and LinkedIn)?
[ ] ~15 minutes after opening the pull request, the CNCF-Bot will post the URL for your staging server. Have you confirmed that it looks good to you and then added a comment to the PR saying "LGTM"?
Build failed because of:
No cached entry, and Valve Software (member) has issues with twitter: https://twitter.com/valveoficial, 404 - {"errors":[{"code":34,"message":"Sorry, that page does not exist."}]}
Empty twitter for Rancher Federal (member): https://twitter.com/rancherfederal
No cached entry, and Sosivio (member) has issues with twitter: https://twitter.com/SosivioLtd, 404 - {"errors":[{"code":34,"message":"Sorry, that page does not exist."}]}
Empty twitter for Banzai Cloud (KCSP): https://twitter.com/banzaicloud
Empty twitter for StackStorm: https://twitter.com/Stack_Storm
Empty twitter for WasmEdge Runtime: https://twitter.com/realwasmedge
Skooner has an empty or missing homepage_url
|
2025-04-01T06:38:13.472273
| 2022-12-13T17:59:23
|
2550544943
|
{
"authors": [
"amye",
"bsctl",
"krook",
"nate-double-u",
"oliverbaehler"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4781",
"repo": "cncf/sandbox",
"url": "https://github.com/cncf/sandbox/issues/166"
}
|
gharchive/issue
|
[SANDBOX PROJECT ONBOARDING] Capsule
Welcome to CNCF Project Onboarding!
This is an issue created to help onboard your project into the CNCF after the TOC has voted to accept your project.
We would like to complete onboarding within one month of acceptance.
From the project side, please ensure that you:
[x] Understand the project proposal process and reqs: https://github.com/cncf/toc/blob/main/process/project_proposals.md#introduction
[x] Understand the services available for your project at CNCF https://www.cncf.io/services-for-projects/
[x] Ensure your project meets the CNCF IP Policy: https://github.com/cncf/foundation/blob/master/charter.md#11-ip-policy
[x] Review the online programs guidelines: https://github.com/cncf/foundation/blob/master/online-programs-guidelines.md
[x] Understand the trademark guidelines: https://www.linuxfoundation.org/en/trademark-usage/
[x] Understand the license allowlist: https://github.com/cncf/foundation/blob/master/allowed-third-party-license-policy.md#approved-licenses-for-allowlist
[x] Is your project working on written, open governance? see https://contribute.cncf.io/maintainers/governance/
[x] Slack: Are your slack channels migrated to the Kubernetes or CNCF Slack? (see https://slack.com/help/articles/217872578-Import-data-from-one-Slack-workspace-to-another for more details)
[x] Is your project in its own separate neutral github organization?
[x] Submitted a Pull request to add your project as a sandbox project to https://landscape.cncf.io
[x] Create maintainer list + add to aggregated https://maintainers.cncf.io list by submitting a PR to it
[x] Have added your project to https://github.com/cncf/contribute
[x] Artwork: Submit a pull request to https://github.com/cncf/artwork with your artwork
[x] Domain: transfer domain to the CNCF - https://jira.linuxfoundation.org/plugins/servlet/theme/portal/2/create/63
Things that CNCF will need from the project:
[x] Provide emails for the maintainers added to https://maintainers.cncf.io in order to get access to the maintainers mailing list and ServiceDesk
[x] Trademarks: transfer any trademark and logo mark assets over to the LF - https://github.com/cncf/foundation/tree/master/agreements has agreements
[x] GitHub: ensure 'thelinuxfoundation' and 'caniszczyk' are added as initial org owners, this helps us make sure we have continuity of GH ownership
[x] GitHub: ensure DCO or CLA are enabled for all GitHub repositories of the project
[x] GitHub: ensure that hat the CNCF Code of Conduct (or your adopted version of it) are explicitly referenced at the project's README on GitHub
[x] Website: ensure LF footer is there and website guidelines followed (if your project doesn't have a dedicated website, please adopt those guidelines to the README file of your project on GitHub).
[x] Website: Analytics transferred to<EMAIL_ADDRESS>[x] CII: Start on a CII best practices badge https://bestpractices.coreinfrastructure.org/en
Things that the CNCF will do or help the project to do:
[x] Devstats: add to devstats https://devstats.cncf.io/
[x] Insights: add to LFX Insights https://insights.v3.lfx.linuxfoundation.org/
[x] Marketing: update relevant intro + slide decks
[x] Events: update CFP + Registration + CFP Area forms
[x] ServiceDesk: confirm maintainers have read https://www.cncf.io/services-for-projects/
[x] CNCF Welcome Email Sent to confirm maintainer list access, welcome email has monthly project sync details
[x] Create space for meetings/events on https://community.cncf.io, e.g., https://community.cncf.io/pravega-community/ - (https://github.com/cncf/communitygroups/blob/main/README.md#cncf-projects)
[x] Adopt a license scanning tool, like FOSSA or Snyk
I'd prefer to do it via ticket as I may not be the one to do the work -- but I'm also happy to open it on your behalf if you've not got access yet (would just need your email address, which I can probably get from @krook)
Just pinged it to you in Slack.
A Google analytics property has been created, the tracking number is G-4YLJ6T1Z8F
I couldn't use the hotmail email address provided, as Google refused to accept it, saying it was "an alternate" for a gmail address, so I used the gmail address instead. Please confirm you got the invite @oliverbaehler.
<!-- Google tag (gtag.js) -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-4YLJ6T1Z8F"></script>
<script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-4YLJ6T1Z8F'); </script>
@nate-double-u I have added the GA:
https://github.com/projectcapsule/website/pull/10/files
Can you check if it's tracking?
Haven't gotten an invite. TBH I am also confused by these analytics... :D
I have one mail that<EMAIL_ADDRESS>
@nate-double-u can we mark this final item complete now?
[ ] Website: Analytics transferred to<EMAIL_ADDRESS>
Google Analytics is set up and collecting data now.
@oliverbaehler, I've invited your<EMAIL_ADDRESS>and I've just added your<EMAIL_ADDRESS>account as well. Please let me know if you're unable to access.
@krook, we can check this off now: Website: Analytics transferred to<EMAIL_ADDRESS>
@oliverbaehler one additional question,
The CNCF has the https://projectcapsule.dev/ domain which is hosting a site for Capsule.
But there's also an (earlier?) one that is at https://capsule.clastix.io/
Is it possible to have the clastix.io one just forward to projectcapsule.dev?
@krook we will take care of that.
Excellent. We can follow up on that separately. But with everything else complete for onboarding we can mark this complete 🎉
|
2025-04-01T06:38:13.475888
| 2024-07-31T21:24:51
|
2440978079
|
{
"authors": [
"anu1217"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4782",
"repo": "cnerg/OpenMCActivationStudy",
"url": "https://github.com/cnerg/OpenMCActivationStudy/pull/14"
}
|
gharchive/pull-request
|
Making changes to ALARA flux file and input
Changed flux values to go from high to low energy in W wall
Increased tolerance to gather data for more nuclides
I noticed the Neutron_Flux.csv file was still going from low to high energy, even though the OpenMC script has it written from high to low. The flux file currently in the repo may be from an older version of the OpenMC script
|
2025-04-01T06:38:13.483874
| 2023-05-22T12:36:21
|
1719603473
|
{
"authors": [
"cnlohr",
"emeb",
"recallmenot"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4783",
"repo": "cnlohr/ch32v003fun",
"url": "https://github.com/cnlohr/ch32v003fun/pull/110"
}
|
gharchive/pull-request
|
SSD1306 drawImage
I've been working on (image) compression and the drawImage function for the @emeb SSD1306 was a necessary side product.
As the display operates in vertical mode with us looking at it horizontally, still every pixel needs to be set.
I managed a tiny speed increase by eliminating the call to drawPixel but the main benefit of the function is to use black or white as transparency and the ability to draw ontop of the buffer contents using bitmath, think sprites!
Of course the SSD1306_LOG_IMAGE can be stripped away.
It looks good on my end! What do you think @emeb ?
Looks good to me!
|
2025-04-01T06:38:13.535710
| 2017-03-29T16:53:14
|
217940272
|
{
"authors": [
"SerjoA",
"austintoddj",
"reliq"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4784",
"repo": "cnvs/canvas",
"url": "https://github.com/cnvs/canvas/issues/326"
}
|
gharchive/issue
|
Question: do you have an example on how to add plugins to canvas
hi, do you have some kind of example of how to extend canvas even further?
thank you for the work, it's awesome
Thank you @SerjoA for your kind comments on the project! I don't currently have any examples of plugins, it's been a work-in-progress between a couple developers here. Anything that you have to offer in that area would be a very welcome contribution!
thank you for your kind words, I can try and make a plugin in my spare time. I just need some info, like what folder to put the plugin in and how to connect it to the system. Do you have any specific request in mind? Also, you can redirect me to someone who is working with this system and I can ask them for some info and then make a plugin.
Hello @SerjoA,
There is a base extension class here: https://github.com/cnvs/easel/blob/master/src/Extensions/Extension.php which we adapted from Flarum (https://github.com/flarum/core). Themes are extensions so they therefore extend this base class.
thanks @reliq, I'll look into it and hope to learn from it
@SerjoA If you're satisfied with the answer provided here, feel free to close the issue out 👍
yes, thank you. I am now using Caffeinated Modules for Laravel, it's awesome
|
2025-04-01T06:38:14.156320
| 2021-09-08T19:13:38
|
991447370
|
{
"authors": [
"ianjevans",
"jseldess"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4785",
"repo": "cockroachdb/docs",
"url": "https://github.com/cockroachdb/docs/issues/11335"
}
|
gharchive/issue
|
sql: allow references to top-level WITH from apply-join
PR: https://github.com/cockroachdb/cockroach/pull/65550
From release notes:
References to WITH expressions from correlated subqueries are now always supported. [#65550][#65550] {% comment %}doc{% endcomment %}
Our WITH, correlated subqueries, and known limitations docs do not explicitly disallow WITH expressions in correlated subqueries.
Closing as there doesn't seem to be doc impact.
|
2025-04-01T06:38:14.159774
| 2022-01-21T09:15:34
|
1110231984
|
{
"authors": [
"cockroach-teamcity",
"rafiss"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4786",
"repo": "cockroachdb/docs",
"url": "https://github.com/cockroachdb/docs/issues/12795"
}
|
gharchive/issue
|
sql,server: support SCRAM authentication for SQL sessions
Exalate commented:
https://github.com/cockroachdb/cockroach/pull/74301 --- Release note (security update): The hash method used to encode cleartext passwords before storing them is now configurable, via the new cluster setting server.user_login.password_encryption. Its supported values are crdb-bcrypt and scram-sha-256. The cluster setting only becomes effective and its default value is scram-sha-256 after all cluster nodes have been upgraded. Prior to completion of the upgrade, the cluster behaves as if the cluster setting is fixed to crdb-bcrypt (for backward compatibility) Note that the preferred way to populate password credentials for SQL user accounts is to pre-compute the hash client-side, and pass the precomputed hash via CREATE/ALTER USER/ROLE WITH PASSWORD. This ensures that the server never sees the cleartext password. Release note (security update): The cost of the hashing function for scram-sha-256 is now configurable via the new cluster setting server.user_login.password_hashes.default_cost.scram_sha_256. Its default value is 119680, which corresponds to an approximate password check latency of 50-100ms on modern hardware. This value should be increased over time to reflect improvements to CPU performance: the latency should not become so small that it becomes feasible to bruteforce passwords via repeated login attempts. Future versions of CockroachDB will likely update the default accordingly. Release note (sql change): The session variable password_encryption is now exposed to SQL clients. Note that SQL clients cannot modify its value directly; it is configurable via a cluster setting.
Jira Issue: DOC-2364
covered by https://github.com/cockroachdb/docs/issues/12792
|
2025-04-01T06:38:14.164630
| 2020-12-10T21:41:55
|
761639793
|
{
"authors": [
"ericharmeling",
"florence-crl",
"rmloveland",
"taroface"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4787",
"repo": "cockroachdb/docs",
"url": "https://github.com/cockroachdb/docs/issues/9171"
}
|
gharchive/issue
|
Duplicate Indexes based on column(s) different from the primary index
This documentation page describes an example where the secondary indexes are based on column(s) the SAME AS THE PRIMARY INDEX. In this case, secondary indexes should be created on ALL OTHER localities, NOT INCLUDING where the primary index is located.
Please add clarification to that page that secondary indexes based on column(s) DIFFERENT FROM THE PRIMARY INDEX should be created on ALL localities INCLUDING where the primary index is located. This case was pointed out in this support ticket: https://cockroachdb.zendesk.com/agent/tickets/6985.
To test both cases, I used these sql files in a multi-region cluster:
CREATE INDEX state_idx_central ON postal.sql.txt
INSERT INTO postal_codes VALUES.sql.txt
@ericharmeling Would this belong to your area? I'm not sure who previously worked on these docs, but I can also see them belonging to my area.
I think Jesse wrote these docs, but they are in the "multi-region" area, which belongs to @rmloveland. Rich, I'm sure this page is on your radar for scheduled multi-region updates?
This usage pattern will be replaced by something much simpler for end users in 21.1, but we will need to fix bugs in this doc for now. I'll assign to myself.
@taroface I hope that's ok that I snagged this. I need to learn more about these old multi-region patterns as part of writing docs for the new ones.
@rmloveland For sure! Thanks for taking it on.
|
2025-04-01T06:38:14.242222
| 2024-04-16T15:17:59
|
2246318213
|
{
"authors": [
"Didayolo"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4788",
"repo": "codalab/codabench",
"url": "https://github.com/codalab/codabench/issues/1413"
}
|
gharchive/issue
|
Upgrading Python and packages using Poetry
As discussed in #1023 and #1410, we should regularly upgrade the Python version and other packages.
Apparently Poetry is an interesting tool to help with resolving package conflicts (see the sketch after the file lists below).
We need to keep in mind that we have several Dockerfiles and requirements.txt files in this project. Putting this in place may be tricky at first but very useful in the future.
Requirements files
https://github.com/codalab/codabench/blob/develop/requirements.txt
https://github.com/codalab/codabench/blob/develop/requirements.dev.txt
https://github.com/codalab/codabench/blob/develop/compute_worker/compute_worker_requirements.txt
Dockerfiles
https://github.com/codalab/codabench/blob/develop/Dockerfile
https://github.com/codalab/codabench/blob/develop/Dockerfile.builder
https://github.com/codalab/codabench/blob/develop/Dockerfile.celery
https://github.com/codalab/codabench/blob/develop/Dockerfile.compute_worker
https://github.com/codalab/codabench/blob/develop/Dockerfile.compute_worker_gpu
https://github.com/codalab/codabench/blob/develop/Dockerfile.flower
https://github.com/codalab/codabench/blob/develop/Dockerfile.rabbitmq
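A rough sketch of what the Poetry side could look like (hypothetical pins, not the project's actual constraints): dependencies get declared once in pyproject.toml, and the requirements*.txt files consumed by the Dockerfiles above can be regenerated from the lock file so the images keep building unchanged.
[tool.poetry.dependencies]
python = "^3.10"      # hypothetical target version
django = "^4.2"       # hypothetical pin
[tool.poetry.group.dev.dependencies]
pytest = "^7.4"       # hypothetical pin
poetry lock then resolves the whole tree at once, and something like poetry export -f requirements.txt -o requirements.txt (plus a dev variant for requirements.dev.txt) can keep the existing Docker builds working while upgrades happen in one place.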
Related branch: https://github.com/codalab/codabench/tree/issue_1413
Solved by #1416
|
2025-04-01T06:38:14.244192
| 2021-02-17T17:55:46
|
810407547
|
{
"authors": [
"codazoda"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4789",
"repo": "codazoda/neatcss",
"url": "https://github.com/codazoda/neatcss/issues/10"
}
|
gharchive/issue
|
Image Borders as Class
Neat needs a class for image borders.
When images are light on light or dark on dark, an image border helps them stand out. In neat.html I used an inline style to add a border. Now that there is a dark mode theme, a class should be used instead. That would allow the border to swap back and forth between dark and light mode.
Change neat.html so that it's got a class instead of a style
Add the white border as the default style to neat.css
Add a black border in the dark mode section of neat.css
I've fixed this by adding a class of bordered and removing the styles.
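A minimal sketch of what that class can look like, following the three steps listed above (the selector and colors are illustrative, not necessarily what neat.css ended up shipping, and the dark-mode section is assumed to be a prefers-color-scheme media query):
img.bordered {
  border: 1px solid #fff;   /* white border as the default style */
}
@media (prefers-color-scheme: dark) {
  img.bordered {
    border-color: #000;     /* black border in dark mode */
  }
}
In neat.html the inline style then becomes class="bordered" on the relevant img tags.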
|
2025-04-01T06:38:14.320996
| 2015-04-06T13:15:05
|
66604059
|
{
"authors": [
"mshenfield",
"neolytics"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4790",
"repo": "code-for-nashville/hrc-employment-diversity-report",
"url": "https://github.com/code-for-nashville/hrc-employment-diversity-report/issues/34"
}
|
gharchive/issue
|
Figure out a metric for measuring changes over time
@mshenfield
Potential metrics...:
Chi Square. Compare the Observed/Expected ratios for Metro as a whole and plot the resulting chi-square value over time.
Advantages: Statistical backing, appropriate for this type of data, creates a single metric that can be plotted
Disadvantages: May be too sophisticated for the average user; how do we handle multiple income levels?
Some metric that relates the number of departments at or near the values predicted by the census?
@neolytics I think Chi Square is a good start. Really the actual Chi Square score is less important than the P value it indicates. We could use ranges of P values to simplify the Chi Square down to Red, Yellow, and Green "Diversity Health" scores. For example, P <= .05 could be Red, meaning it was highly likely that the lack of diversity was not by chance. It would be simple to digest, while still being scientifically honest about the health of the department.
I don't know about multiple income levels in a color scenario. Maybe have three channels in our diagram - one for each income level? Here's a mock of that idea:
Hrm, this is an interesting take on the visualization. I really like the concept. We'd have to have an explanation link somewhere to help with this (but this would be the case anyway). It doesn't show month over month trends however, and the updates would be done quarterly.
@mshenfield Do you know of a convenient JS based library that could do something like this? I know you did work with D3 back when you were working with ngd, but I try to avoid pure D3 like the plague. Very powerful, but really steep learning curve.
@mshenfield
Hey man, any way you could help me push out a visualization and the chi-square piece? We need to get this ready and I am just struggling to get to it with the whole NDOCH thing to consider.
@neolytics Definitely - I'm going to try and process the data using Chi-Square into a usable format today and tomorrow and then move on to the visualization.
@mshenfield Awesome dude. You have no idea how much I appreciate your help. Thanks!
We've decided on Chi Square. It's the appropriate statistical analysis for this data.
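A small sketch of the chosen approach (SciPy's chisquare is one standard implementation; the p <= .05 cutoff for red comes from the discussion above, while the .20 cutoff for yellow is just an illustrative choice):
from scipy.stats import chisquare

def diversity_health(observed, expected):
    """Traffic-light score for one department's observed vs. census-expected counts."""
    _, p_value = chisquare(f_obs=observed, f_exp=expected)
    if p_value <= 0.05:
        return "red"      # lack of diversity very unlikely to be chance
    if p_value <= 0.20:
        return "yellow"   # borderline (cutoff chosen for illustration)
    return "green"

# example: diversity_health(observed=[40, 5, 5], expected=[30, 12, 8])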
|
2025-04-01T06:38:14.326108
| 2023-02-28T11:03:29
|
1602824941
|
{
"authors": [
"JWittmeyer",
"SimonDegrafKern"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4791",
"repo": "code-kern-ai/refinery-gateway",
"url": "https://github.com/code-kern-ai/refinery-gateway/pull/119"
}
|
gharchive/pull-request
|
Admin features new
Gateway https://github.com/code-kern-ai/refinery-gateway/pull/119
Model submodule https://github.com/code-kern-ai/refinery-submodule-model/pull/36
Admin dashboard: https://github.com/code-kern-ai/admin-dashboard/pull/62
Refinery UI: https://github.com/code-kern-ai/refinery-ui/pull/125
Updater https://github.com/code-kern-ai/refinery-updater/pull/28
I'm not super sure how, but I think there is still some issue in the logic, since I never touched the users without a role assignment yet they still got a timestamp.
The only way I could find to reproduce was:
start fresh
Create org
Assign user to org
switch to refinery
import first project
That said, I tried to find it by throwing an exception on the id, but nothing happened at that point, so maybe some further investigation is required.
[ ] resolved
|
2025-04-01T06:38:14.342489
| 2023-07-28T13:55:41
|
1826476960
|
{
"authors": [
"alexbarcelo",
"praveenexaf",
"ricciarellif",
"softfactory1"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4792",
"repo": "code-specialist/fastapi-keycloak",
"url": "https://github.com/code-specialist/fastapi-keycloak/issues/101"
}
|
gharchive/issue
|
realmRoles Field required
Using the examples provided, I'm unable to get the user's info through the route
http://localhost:8081/user/2b1b34f0-5efb-4cf8-b620-b619fd9b98bc (it's a valid user-id)
due to a pydantic error:
pydantic_core._pydantic_core.ValidationError: 1 validation error for KeycloakUser
realmRoles Field required [type=missing, input_value={'id': '2b1b34f0-5efb-4cf8-b620-b619fd9b98bc'...}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.1/v/missing
This is due to the absence, in the incoming JSON (Python dict), of the key "realmRoles", which is NOT always returned by the Keycloak platform.
In your model.py module, class KeycloakUser(BaseModel), the realmRoles field is specified as "Optional" (realmRoles: Optional[List[str]]), but this attribute seems to be ignored by pydantic...
Any suggestion?
Thanks in advance
My requirements.txt:
fastapi==0.100.1
fastapi_keycloak==1.0.10
pydantic==2.2.1
uvicorn==0.23.1
My KeyCloack platform:
docker image of jboss/keycloak:latest (Server version: 16.1.1) with postgresql 13.0
Hi,
you can apply a workaround as a patch of KeycloakUser.__init__ like the following:
oryg__init__ = KeycloakUser.__init__
def mocked__init__(*args, **kwargs):
    kwargs['realmRoles'] = kwargs.get('realmRoles', [])
    kwargs['attributes'] = kwargs.get('attributes', {})
    oryg__init__(*args, **kwargs)
KeycloakUser.__init__ = mocked__init__
In which file do we have to apply this?
You can define it in any file like the following function
def patcher():
    from fastapi_keycloak import KeycloakUser
    oryg__init__ = KeycloakUser.__init__
    def new__init__(*args, **kwargs):
        kwargs['realmRoles'] = kwargs.get('realmRoles', [])
        kwargs['attributes'] = kwargs.get('attributes', {})
        oryg__init__(*args, **kwargs)
    KeycloakUser.__init__ = new__init__
And call patcher before any FastAPI code is executed, in some main.py or whatever main module you defined.
I believe that this is related to #97. But it has been a while, so maybe I misremember the error I got at that point.
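For reference, the underlying change is that pydantic v2 no longer treats Optional[...] as implying a default, so a field declared only as Optional[List[str]] becomes required. The patch above works around that from the outside; a minimal sketch of the model-level equivalent (not the library's actual code) is:
from typing import List, Optional
from pydantic import BaseModel

class KeycloakUser(BaseModel):
    id: str
    realmRoles: Optional[List[str]] = None   # explicit default makes the field optional again
    attributes: Optional[dict] = None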
|
2025-04-01T06:38:14.355455
| 2021-02-07T20:57:06
|
803056067
|
{
"authors": [
"blafving",
"joshreisner"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4793",
"repo": "code4recovery/tsml-ui",
"url": "https://github.com/code4recovery/tsml-ui/issues/87"
}
|
gharchive/issue
|
Google Sheets Styled demo not working
Solving this issue will help us figure out whether our integration with Google Sheets will still work.
The styled demo mentioned in the README is not working due to: Uncaught ReferenceError: tsml_react_config is not defined
Try creating an instance of tsml_react_config before the meetings src with:
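(the snippet itself did not survive here; a minimal sketch of such a declaration, assuming an empty object is enough to satisfy the reference and that the real keys are the ones documented in the project README):
<script>
  // declare the global before the tsml-ui script tag loads
  var tsml_react_config = {};
</script>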
I had additional trouble with Google Drive after this debug on my test site, and I will continue to look at that. I will share further info as I learn more if this doesn't do the trick.
Thank you!
Should be working again now! Thanks for letting us know that was out of date. https://react.meetingguide.org/demo.html
|
2025-04-01T06:38:14.362709
| 2022-06-02T10:13:32
|
1257979865
|
{
"authors": [
"hadrysm",
"krzyjel"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4794",
"repo": "codeandpepper/janush",
"url": "https://github.com/codeandpepper/janush/issues/242"
}
|
gharchive/issue
|
"Failed to compile" error message in the new generated app
Type: Bug
Operating system: MacOS 10.15.7
Affects Version: 1.0.3
Priority: High
Severity: Critical
Preconditions: Generate the new application using janush command
Steps to reproduce:
Use command cd web
Use command npm run start or yarn start
Actual result:
After the initial response there is an error message in the terminal:
"Failed to compile. 'Router' cannot be used as a JSX component. Its instance type 'BrowserRouter' is not a valid JSX element".
The app opens in the default browser with the Janush index page, but no action can be performed on it.
Expected result:
The command finishes running the application correctly.
A window opens in the default browser with the Janush index page and you can perform actions on it (for example, clicking sign in redirects to the sign-in page).
Attachments:
The problem here is the wrong node version. You had node version 14 set and Janush has dependencies in package-lock.json for node v16.
Solution:
Reinstall the application with Node v16
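For example, with nvm (assuming nvm is installed; clearing node_modules is one reasonable reading of "reinstall"):
nvm install 16
nvm use 16
rm -rf node_modules package-lock.json
npm install
npm run start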
|
2025-04-01T06:38:14.384366
| 2016-06-04T01:13:04
|
158479847
|
{
"authors": [
"cyphactor"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4795",
"repo": "codebreakdown/togls",
"url": "https://github.com/codebreakdown/togls/issues/83"
}
|
gharchive/issue
|
Figure out how to get NullToggle to not cause app to fail when feature target type validation happens on evaluation
Currently the NullToggle has a null feature instance that has a target_type of NONE.
In any of the cases where a NullToggle is returned the evaluations would fail if a target is passed in during evaluation. This is extremely bad, and something that can NOT happen.
We need to figure out a way to prevent this exception from happening in the NullToggle case.
Ideas so far around this are to maybe make a NullFeature that uses the NOT_SET type. This bastardizes the intent for the NOT_SET type but would work with the current logic in the feature target type validation logic. However, if the process made it further say to the type match check it might not.
Another idea would be to add a new TargetType that would have to be considered in all of the locations we care about target types (the feature target type validator, the feature to rule target type checker, etc.). This new type would tell those checkers to basically not worry about matching it to the feature and to not worry about any target passed in during evaluation matching to the feature contract.
At the moment this seems like the best idea.
I created a branch called fix_null_toggle_null_feature_target_type with a failing test in the scenario that this would happen.
I dug into this further and it seems that this would only happen if it was unable to find a toggle in all of the drivers in the toggle repository. So, it would have to fail to find it in all of the drivers and lastly the in-memory driver.
This is a viable scenario specifically if the user accidentally fat fingers the feature identifier so it doesn't match a defined feature.
This has been merged in. Closing...
|
2025-04-01T06:38:14.387762
| 2021-05-26T22:29:47
|
902983902
|
{
"authors": [
"MaslovD",
"codebude"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4796",
"repo": "codebude/QRCoder",
"url": "https://github.com/codebude/QRCoder/issues/301"
}
|
gharchive/issue
|
How to remove border if I don't need one?
Type of issue
[ ] Bug
[+] Question (e.g. about handling/usage)
[ ] Request for new feature/improvement
Expected Behavior
Current Behavior
Possible Solution (optional)
Steps to Reproduce (for bugs)
Your Environment
Hi @MaslovD ,
just set the parameter "drawQuietZones" to false. You can read more about the parameters in our wiki: https://github.com/codebude/QRCoder/wiki/Advanced-usage---QR-Code-renderers#21-qrcode-renderer-in-detail
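A short sketch based on that wiki page (overload details may differ slightly between QRCoder versions):
using QRCoder;
using System.Drawing;

var generator = new QRCodeGenerator();
var data = generator.CreateQrCode("some payload", QRCodeGenerator.ECCLevel.Q);
var qrCode = new QRCode(data);
// drawQuietZones: false drops the white border (quiet zone) around the code
Bitmap image = qrCode.GetGraphic(20, Color.Black, Color.White, drawQuietZones: false);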
|
2025-04-01T06:38:14.428237
| 2022-07-05T07:45:02
|
1293919233
|
{
"authors": [
"DavertMik",
"VivekLande",
"dyaroman",
"viveklandeWK"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4797",
"repo": "codeceptjs/CodeceptJS",
"url": "https://github.com/codeceptjs/CodeceptJS/issues/3353"
}
|
gharchive/issue
|
Unable to use BeforeSuite and AfterSuite hooks
What are you trying to achieve?
I am trying to use BeforeSuite and AfterSuite hooks
What do you get instead?
Could not include object Step Definition from ../step_definitions/hooks.js from module 'C:\Projects\Aura-Core\aura-accelerate-seed\step_definitions\hooks.js'
BeforeSuite is not a function
TypeError: BeforeSuite is not a function
at Object. (C:\Projects\Aura-Core\aura-accelerate-seed\step_definitions\hooks.js:4:1)
at Module._compile (node:internal/modules/cjs/loader:1105:14)
at Object.Module._extensions..js (node:internal/modules/cjs/loader:1159:10)
at Module.load (node:internal/modules/cjs/loader:981:32)
at Function.Module._load (node:internal/modules/cjs/loader:822:12)
at Module.require (node:internal/modules/cjs/loader:1005:19)
at require (node:internal/modules/cjs/helpers:102:18)
at loadSupportObject (C:\Projects\Aura-Core\aura-accelerate-seed\node_modules\codeceptjs\lib\container.js:338:17)
at loadGherkinSteps (C:\Projects\Aura-Core\aura-accelerate-seed\node_modules\codeceptjs\lib\container.js:317:7)
at Function.create (C:\Projects\Aura-Core\aura-accelerate-seed\node_modules\codeceptjs\lib\container.js:48:25)
Provide console output if related. Use --verbose mode for more details.
Provide test source code if related
### Details
* CodeceptJS version: 3.3.3
* NodeJS Version: 16.5
* Operating System: Windows
* playwright
* Configuration file:
@viveklandeWK I think you should try to use _beforeSuite and _afterSuite instead of BeforeSuite and AfterSuite, according to the documentation on hooks
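A rough sketch of a custom helper exposing those hooks (the file name and its registration under helpers in codecept.conf.js are hypothetical; see the CodeceptJS custom-helper docs for the exact wiring):
// hooks_helper.js
const Helper = require('@codeceptjs/helper');

class HooksHelper extends Helper {
  async _beforeSuite(suite) {
    // runs once before each suite / feature
  }

  async _afterSuite(suite) {
    // runs once after each suite / feature
  }
}

module.exports = HooksHelper;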
Hi @dyaroman, the browser session is not available in _beforeSuite(); do we have any way to make it available?
Following documentation before creating an issue is highly recommended
Thanks
|
2025-04-01T06:38:14.430001
| 2020-10-07T23:10:11
|
716908761
|
{
"authors": [
"ofuochi",
"tayormi"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4798",
"repo": "codeclannigeria/codeclannigeria-backend",
"url": "https://github.com/codeclannigeria/codeclannigeria-backend/pull/189"
}
|
gharchive/pull-request
|
Fix dto
closes #188
Here is an overview of what got changed by this pull request:
Complexity increasing per file
==============================
- src/shared/controllers/base.controller.ts 4
- src/users/users.controller.ts 4
See the complete overview on Codacy
|
2025-04-01T06:38:14.449296
| 2021-11-01T18:25:26
|
1041463831
|
{
"authors": [
"adolfov"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4799",
"repo": "codecov/uploader",
"url": "https://github.com/codecov/uploader/issues/481"
}
|
gharchive/issue
|
Version 0.1.9 incorrectly detects Heroku CI as provider when using on Travis
Describe the bug
I'm running the uploader from travis and on version 0.1.8 the uploader correctly detected this
[2021-10-27T18:48:16.571Z] ['info'] Detected Travis CI as the CI provider.
Since the update to 0.1.9, the uploader detects Heroku CI without any changes on my side
[2021-11-01T17:56:05.952Z] ['info'] Detected Heroku CI as the CI provider.
To Reproduce
Steps to reproduce the behavior:
Run the uploader on verbose mode from a travis build
See error
Expected behavior
Travis to be detected when running build on travis
Screenshots
N/A
Additional context
Because Heroku is detected, the uploader tries to use the Heroku env variables and the request is invalid (missing branch, etc.)
[2021-11-01T17:56:05.952Z] ['info'] Detected Heroku CI as the CI provider.
[2021-11-01T17:56:05.952Z] ['verbose'] -> Using the following env variables:
[2021-11-01T17:56:05.952Z] ['verbose'] CI: true
[2021-11-01T17:56:05.952Z] ['verbose'] HEROKU_TEST_RUN_BRANCH: undefined
[2021-11-01T17:56:05.952Z] ['verbose'] HEROKU_TEST_RUN_COMMIT_VERSION: undefined
[2021-11-01T17:56:05.952Z] ['verbose'] HEROKU_TEST_RUN_ID: undefined
[2021-11-01T17:56:05.971Z] ['info'] Pinging Codecov: https://codecov.io/upload/v4?package=uploader-0.1.9&token=*******&branch=&build=&build_url=&commit=&job=&pr=&service=heroku&slug=XX%2FXX&name=&tag=&flags=&parent=
[2021-11-01T17:56:05.971Z] ['verbose'] Passed token was 36 characters long
[2021-11-01T17:56:05.971Z] ['verbose'] https://codecov.io/upload/v4?package=uploader-0.1.9&branch=&build=&build_url=&commit=&job=&pr=&service=heroku&slug=XX%2FXX&name=&tag=&flags=&parent=
Content-Type: 'text/plain'
Content-Encoding: 'gzip'
X-Reduced-Redundancy: 'false'
[2021-11-01T17:56:06.100Z] ['error'] Error POSTing to https://codecov.io: 400 Invalid request parameters
[2021-11-01T17:56:06.101Z] ['error'] There was an error running the uploader: Error uploading to https://codecov.io: Error: Bad Request
Looks like this issue is fixed by:
https://github.com/codecov/uploader/pull/485/files#diff-a53210814fae036993f7ffe30dc831d847e6f4526dc59cd216c7ba18a7d9d354R5
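For context, the linked fix appears to tighten detection so the generic CI flag alone no longer selects Heroku; a rough TypeScript sketch (names are assumptions, not the actual uploader source):
```typescript
type Envs = Record<string, string | undefined>;

// Only claim Heroku CI when a Heroku-specific variable is present.
export function detectHeroku(envs: Envs): boolean {
  return Boolean(envs.CI && envs.HEROKU_TEST_RUN_BRANCH);
}
```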
|
2025-04-01T06:38:14.452233
| 2017-01-15T00:35:42
|
200839318
|
{
"authors": [
"cduchesne"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4800",
"repo": "codedellemc/libstorage",
"url": "https://github.com/codedellemc/libstorage/issues/389"
}
|
gharchive/issue
|
docker integration driver ignores preemption option during mount command
When attempting to mount a volume via the docker integration driver, there is no check in place to allow the preemptive mount option to function correctly. This is the result of scrubbing unavailable volumes from the list of volumes before processing the attach/mount command.
https://github.com/codedellemc/libstorage/blob/master/drivers/integration/docker/docker.go#L173
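A hypothetical Go sketch of the intended behavior (types and names are made up, not the actual libstorage code):
```go
package docker

type volume struct {
	name            string
	attachedToOther bool
}

// candidateVolumes keeps volumes attached elsewhere in the list when
// preemption is requested, instead of scrubbing them before attach/mount.
func candidateVolumes(vols []volume, preempt bool) []volume {
	out := make([]volume, 0, len(vols))
	for _, v := range vols {
		if v.attachedToOther && !preempt {
			continue // only drop "unavailable" volumes when preemption is off
		}
		out = append(out, v)
	}
	return out
}
```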
Fix looks good
|
2025-04-01T06:38:14.483565
| 2017-04-13T17:16:11
|
221627832
|
{
"authors": [
"NealHumphrey",
"eng1nerd"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4801",
"repo": "codefordc/housing-insights",
"url": "https://github.com/codefordc/housing-insights/issues/198"
}
|
gharchive/issue
|
Tax assessment data via opendata API
Opendata.dc.gov provides tax assessment data of all properties in DC via an API.
[x] Create a TaxApiConn class based on the MarApiConn class currently in the code. Use this to add raw data files to /data/raw/tax_assessment/opendata/YYYYMMDD when the python/cmd/data.py script is run with the appropriate arguments. Write a demo command to run at the command line to create this file with the appropriate datestamp.
(pull request after step 1; someone else can do part 2 if desired)
[ ] Add this file to the manifest.csv using the instructions on adding a new dataset.
Future: automatically add new files to the pipeline whenever this update script is run, but we wait to do this until we have created an appropriate structure for all API scripts.
Note on integrating this data from<EMAIL_ADDRESS>the EXTRACTDATE field says when this was last updated; it looks to be roughly monthly.
Hey @eng1nerd have you been able to do any work on this issue? Let me know where things stand - and if you're busy with the new job we can also see if someone's able to take it over!
@NealHumphrey I'm sorry for the delay. I will try to finish it by tomorrow 6pm. If something isn't right or it takes me much longer, then someone else should probably step in.
Latest status:
@eng1nerd 's code has been merged into the codefordc repository under the branch name 198-add-tax-data; should pick up code from there.
Current TaxApiConn class uses the `GeoService` API. Instead we should use the `GeoJSON` API url, but swap it out for .csv as noted in the opendata documentation (bottom of page): http://opendata.dc.gov/pages/using-apis . This will resolve the issue that the number of rows is limited in the geojson api (and also make it easier for us to parse since it'll be in the format we want anyway).
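A rough sketch of what such a downloader could look like (hypothetical names and URL; the real MarApiConn/TaxApiConn structure in the repo may differ):
```python
import os
from datetime import datetime

import requests

OPENDATA_CSV_URL = "https://opendata.arcgis.com/datasets/<dataset-id>.csv"  # placeholder

class TaxApiConn:
    def get_data(self, output_root="data/raw/tax_assessment/opendata"):
        stamp = datetime.now().strftime("%Y%m%d")
        out_dir = os.path.join(output_root, stamp)
        os.makedirs(out_dir, exist_ok=True)
        resp = requests.get(OPENDATA_CSV_URL, timeout=60)
        resp.raise_for_status()
        path = os.path.join(out_dir, "tax_assessment.csv")
        with open(path, "wb") as f:
            f.write(resp.content)
        return path
```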
|
2025-04-01T06:38:14.486874
| 2018-02-02T04:06:14
|
293767176
|
{
"authors": [
"LaurieLinz",
"dwhite96"
],
"license": "isc",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4802",
"repo": "codefordenver/Circular",
"url": "https://github.com/codefordenver/Circular/issues/268"
}
|
gharchive/issue
|
Write tests to cover the test cases
Test Cases are here: https://drive.google.com/drive/folders/1obtB1bMOm_DdY7t7jSrrMLE_lVbiKZ7z
Some tests were added with #247
@scottfirestone, I assigned you to this issue since you mentioned you wanted to write some tests. I hope I didn't overstep my bounds :). Like you said, I thought we should make sure we're not working on the same thing. I'll continue on with some landing page integration tests for now. I may look into unit testing with Cypress to learn how that works but will let you know before I start writing them. If you want to take on another page of the app, just respond to this message. Let me know if there's a better way to handle more than one assignee on an issue using Github/Waffle without stepping on each others toes. I'm usually using Github by myself and not on a team.
Thanks for fixing the Travis CI issue! I'm hoping I can get some tests committed this weekend.
|
2025-04-01T06:38:14.494256
| 2019-06-03T19:58:37
|
451662220
|
{
"authors": [
"ThorbenJensen"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4803",
"repo": "codeformuenster/immoscout",
"url": "https://github.com/codeformuenster/immoscout/issues/8"
}
|
gharchive/issue
|
scraper.py in root folder...
...are we going to use this one in the future?
should we move some parts to immo_scraper/, keep the good, and delete what is deprecated?
@jahnique what is your opinion on this?
Resolved by #14
|
2025-04-01T06:38:14.503157
| 2023-10-15T04:12:13
|
1943699635
|
{
"authors": [
"leekahung",
"milofultz",
"xscottxbrownx"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4804",
"repo": "codeforpdx/PASS",
"url": "https://github.com/codeforpdx/PASS/pull/458"
}
|
gharchive/pull-request
|
Improve accessible name for images on home page and prop names for HomeSection
This PR:
Improves accessible name for images on home page.
1. Adds context for all logo images
2. Removes unnecessary aria-labels
3. Marks decorative (non-informative) images as such using an empty alt attribute
The files this PR affects:
Components
src/components/Footer/RenderCompanyInfoSection.jsx
src/components/NavBar/NavbarDesktop.jsx
src/components/NavBar/NavbarLoggedOut.jsx
src/components/NavBar/NavbarMobile.jsx
src/pages/Home.jsx
Tests
test/components/NavBar/NavbarDesktop.test.jsx
test/components/NavBar/NavbarLoggedOut.test.jsx
test/components/NavBar/NavbarMobile.test.jsx
test/pages/__snapshots__/Home.test.jsx.snap
Screenshots (if applicable):
Should be no difference visually.
Additional Context (optional):
Following guidance around decorative/informative images from W3. Verified using this extension and DevTools.
@leekahung I believe this is ready for you to approve now - so merging is possible.
Yeah, I'm mostly fine with this. Although I've noticed the image alts for HomeSection are all empty now compared to before. Won't they be necessary for accessibility?
https://accessibility.psu.edu/images/imageshtml/
I went with the info about decorative/informative images from W3 (see example 4 in decorative as that's what I thought it fell under). It was a judgment call that I decided they were decorative since I'm not sure they contribute to the content. Happy to hear otherwise, it's a tough call.
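Roughly the pattern under discussion — component and prop names here are illustrative, not the actual PASS code:
```jsx
// Informative image (e.g. a logo): give it a descriptive accessible name.
const Logo = ({ logo }) => <img src={logo} alt="PASS logo" />;

// Decorative section image: empty alt so screen readers skip it; the heading
// immediately after carries the meaning.
const HomeSection = ({ image, heading }) => (
  <section>
    <img src={image} alt="" />
    <h2>{heading}</h2>
  </section>
);
```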
Ah, I see. Well, considering that the section title follows immediately after the images, I think it should be fine even if the images themselves break. Alright, I'll approve this.
Hey @milofultz. Was planning to merge this branch in after resolving a merge conflict for one of the test files.
Unfortunately, the resolution I've attempted to make seemed to have failed the test. I think you'll be able to fix it from your end (sorry for the inconvenience). If the test gets fixed, let me know and I'll have this merged into Development.
Thanks!
@leekahung Should be good to go now 👍
|
2025-04-01T06:38:14.528732
| 2022-12-16T07:53:42
|
1499759517
|
{
"authors": [
"Slamoth",
"kenjis"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4805",
"repo": "codeigniter4/CodeIgniter4",
"url": "https://github.com/codeigniter4/CodeIgniter4/issues/6982"
}
|
gharchive/issue
|
Bug: [Cookie] validatePrefix function cannot understand a bool var even if it is a boolean var
PHP Version
8.1
CodeIgniter4 Version
4.2.10
CodeIgniter4 Installation Method
Composer (as dependency to an existing project)
Which operating systems have you tested for this bug?
Linux
Which server did you use?
apache
Database
Postgres
What happened?
I have an application with benedmunds IONAuth integrated with it. I can login to the site with no problem.
But when I log out, the protected function validatePrefix in vendor/codeigniter4/framework/system/Cookie/Cookie.php throws an error like the one below.
TypeError
CodeIgniter\Cookie\Cookie::validatePrefix(): Argument #2 ($secure) must be of type bool, null given, called in /srv/www/htdocs/vendor/codeigniter4/framework/system/Cookie/Cookie.php on line 226
Line 222 of that file sets the secure flag for the cookie from App.php in the config directory with $secure = $options['secure'];
But it seems that the validatePrefix function cannot tell whether the variable is of type bool or not.
I used dd($secure) and the result is
$secure boolean false
but the validatePrefix function still cannot decide!
When I change line 222 from $secure = $options['secure']; to $secure = (bool)$options['secure']; everything works fine.
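For clarity, the local workaround in context (a patch to the vendored file, not an upstream fix):
```php
// system/Cookie/Cookie.php, around line 222
$secure = (bool) $options['secure']; // was: $secure = $options['secure'];
```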
Steps to Reproduce
Login with IONAuth login page (http://somesite.com/auth/login) and logout with (http://somsite.com/auth/logout)
Expected Output
redirection to login page.
Anything else?
No response
I tested the following code, but cannot reproduce the TypeError.
<?php

namespace App\Controllers;

class Home extends BaseController
{
    public function index()
    {
        helper('cookie');
        delete_cookie('remember_code');
    }
}
No need, @kenjis. Got it figured out. It's because of IonAuth...
|
2025-04-01T06:38:14.531101
| 2021-04-10T16:47:59
|
855109089
|
{
"authors": [
"mostafakhudair",
"paulbalandan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4806",
"repo": "codeigniter4/CodeIgniter4",
"url": "https://github.com/codeigniter4/CodeIgniter4/pull/4544"
}
|
gharchive/pull-request
|
Relocate cookie exception
Finish Relocate Cookie Class #4502
@MGatner this PR needs to be merged before the next release
@paulbalandan please review
I deleted it, but forgot to save changes ^^
Waiting for tests..
All green
@paulbalandan Thank you
|
2025-04-01T06:38:14.552764
| 2020-07-10T20:49:44
|
655028752
|
{
"authors": [
"codejamninja",
"symbiont-liam-howell"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4807",
"repo": "codejamninja/sphinx-markdown-builder",
"url": "https://github.com/codejamninja/sphinx-markdown-builder/pull/45"
}
|
gharchive/pull-request
|
Changed the behavior of _refuri2http to return a refid fragment when markdown_http_base is None
HI 👋
I noticed that some RST internal cross-references like :py:func:`my_func`
would get written to markdown like so `my_func()`
instead of the expected [`my_func()`](#my_func)
Looks like when self.markdown_http_base is None, the reference becomes None. Is this the intended behavior? Feel free to ignore/close this PR if so, otherwise, here's a simple fix.
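A rough sketch of the behaviour change — a standalone helper with hypothetical names, not the builder's actual method:
```python
import posixpath
from typing import Optional

def refuri_to_link(refuri: str, refid: Optional[str], markdown_http_base: Optional[str]) -> str:
    # With no http base configured, fall back to an in-page anchor fragment
    # instead of returning no reference at all.
    if markdown_http_base is None:
        return "#" + (refid or refuri.lstrip("#"))
    return posixpath.join(markdown_http_base, refuri)
```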
@symbiont-liam-howell thanks
|
2025-04-01T06:38:14.554542
| 2019-09-05T23:27:01
|
490058169
|
{
"authors": [
"kirjs"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4808",
"repo": "codelab-fun/codelab",
"url": "https://github.com/codelab-fun/codelab/issues/1025"
}
|
gharchive/issue
|
Rethink the last slide and potentially break it up into multiple slides
Right now the last slide of every milestone is very overloaded: https://codelab.fun/angular/create-first-app/end
Someone with design skills should think through the best way of making it easier to comprehend or potentially break it up into multiple slides
This is the same as #1069. Closing.
|
2025-04-01T06:38:14.557540
| 2017-01-04T02:07:20
|
198614993
|
{
"authors": [
"codelust"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4809",
"repo": "codelust/laravel-skeleton",
"url": "https://github.com/codelust/laravel-skeleton/issues/3"
}
|
gharchive/issue
|
Add User Scaffolding Using Laravel's inbuilt scaffolding
https://laravel.com/docs/5.3/authentication#introduction
Use the scaffolding feature to create the register/login/reset password feature.
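Assuming Laravel 5.3, the stock commands are along these lines:
```bash
# generates the register/login/password-reset routes, controllers and views
php artisan make:auth
php artisan migrate
```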
Pushed in commit: https://github.com/codelust/laravel-skeleton/commit/ae48d8a1a4a49be6dadf1ed9a6e8b8f56d148c21
|
2025-04-01T06:38:14.576220
| 2015-12-14T19:00:47
|
122107697
|
{
"authors": [
"Gvozd",
"develar",
"jadbox",
"nicksrandall",
"phpnode"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4810",
"repo": "codemix/babel-plugin-closure-elimination",
"url": "https://github.com/codemix/babel-plugin-closure-elimination/issues/3"
}
|
gharchive/issue
|
Does not work on babel >= 6.0
ERROR in ./common/index.js
Module build failed: TypeError: Transformer is not a function
at build (/Users/nicksrandall/Documents/DomoGithub/AppTeam6/da-webpack/node_modules/babel-plugin-closure-elimination/lib/index.js:153:10)
at Function.memoisePluginContainer (/Users/nicksrandall/Documents/DomoGithub/AppTeam6/da-webpack/node_modules/babel-core/lib/transformation/file/options/option-manager.js:127:13)
at Function.normalisePlugin (/Users/nicksrandall/Documents/DomoGithub/AppTeam6/da-webpack/node_modules/babel-core/lib/transformation/file/options/option-manager.js:161:32)
at /Users/nicksrandall/Documents/DomoGithub/AppTeam6/da-webpack/node_modules/babel-core/lib/transformation/file/options/option-manager.js:197:30
at Array.map (native)
at Function.normalisePlugins (/Users/nicksrandall/Documents/DomoGithub/AppTeam6/da-webpack/node_modules/babel-core/lib/transformation/file/options/option-manager.js:173:20)
at OptionManager.mergeOptions (/Users/nicksrandall/Documents/DomoGithub/AppTeam6/da-webpack/node_modules/babel-core/lib/transformation/file/options/option-manager.js:271:36)
at OptionManager.init (/Users/nicksrandall/Documents/DomoGithub/AppTeam6/da-webpack/node_modules/babel-core/lib/transformation/file/options/option-manager.js:416:10)
at File.initOptions (/Users/nicksrandall/Documents/DomoGithub/AppTeam6/da-webpack/node_modules/babel-core/lib/transformation/file/index.js:190:75)
at new File (/Users/nicksrandall/Documents/DomoGithub/AppTeam6/da-webpack/node_modules/babel-core/lib/transformation/file/index.js:121:22)
at Pipeline.transform (/Users/nicksrandall/Documents/DomoGithub/AppTeam6/da-webpack/node_modules/babel-core/lib/transformation/pipeline.js:42:16)
at transpile (/Users/nicksrandall/Documents/DomoGithub/AppTeam6/da-webpack/node_modules/babel-loader/index.js:14:22)
at /Users/nicksrandall/Documents/DomoGithub/AppTeam6/da-webpack/node_modules/babel-loader/lib/fs-cache.js:140:16
at ReadFileContext.callback (/Users/nicksrandall/Documents/DomoGithub/AppTeam6/da-webpack/node_modules/babel-loader/lib/fs-cache.js:27:23)
at FSReqWrap.readFileAfterOpen [as oncomplete] (fs.js:325:13)
Not yet. I plan to support it, I just haven't had time yet. PRs welcome :)
+1 (wish I had time to fix this)
I've updated the tests and build scripts to use Babel 6 in my branch below. However, I do not know enough about Babel 6 to determine how to replace the use of Transformer in src/index.js as it's no longer part of the entry parameters given to the plugin by Babel. *
https://github.com/jadbox/babel-plugin-closure-elimination
**
http://babeljs.io/blog/2015/10/29/6.0.0/
Babel 5
export default function({ Plugin, types: t }) {
  return new Plugin('ast-transform', {
    visitor: { ... }
  });
}
Babel 6
export default function({ types: t }) {
  return {
    visitor: { ... }
  };
}
@phpnode Does it mean that you found this plugin useless and don't want to develop/use it anymore?
@develar not at all, the project that I wrote this for is still on babel 5 and I've not had chance to upgrade it yet. I'd like to use this on my babel 6 projects too but there are only so many hours in the day and those are not as performance sensitive - PRs are welcome, otherwise I will get around to this in the coming weeks / months.
@phpnode Thanks for clarification. I am interested because I want to debug lambdas and it is not possible since V8 VM doesn't support column-based breakpoints correctly in all cases (https://bugs.chromium.org/p/v8/issues/detail?id=2825). (I develop JetBrains JS debugger (WebStorm, IDEA and so on)).
Good news everyone!
https://github.com/codemix/babel-plugin-closure-elimination/pull/4
I finished the work of @jadbox
awesome work @Gvozd
Fixed in 1.0.0
@Gvozd Thanks so much for finishing the work in my branch! I'm excited to see if this improves performance.
Btw, will this work for => functions?
|
2025-04-01T06:38:14.598670
| 2017-06-30T12:51:49
|
239770014
|
{
"authors": [
"AerialMantis",
"Ruyk"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4811",
"repo": "codeplaysoftware/standards-proposals",
"url": "https://github.com/codeplaysoftware/standards-proposals/pull/15"
}
|
gharchive/pull-request
|
Fix #14: Added fill method to the handler
Adds a method to the handler class to fill a memory object with a certain value
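Usage would presumably look something like the following; the exact spelling in the proposal may differ:
```cpp
#include <CL/sycl.hpp>

int main() {
  cl::sycl::queue q;
  cl::sycl::buffer<float, 1> buf{cl::sycl::range<1>(1024)};
  q.submit([&](cl::sycl::handler &cgh) {
    auto acc = buf.get_access<cl::sycl::access::mode::discard_write>(cgh);
    cgh.fill(acc, 0.0f); // fill the whole memory object with a single value
  });
  q.wait();
  return 0;
}
```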
Looks good to me :+1:
|
2025-04-01T06:38:14.618947
| 2024-03-06T09:12:36
|
2171007633
|
{
"authors": [
"kylecarbs",
"wf1-brandon-grant"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4812",
"repo": "coder/envbuilder",
"url": "https://github.com/coder/envbuilder/issues/101"
}
|
gharchive/issue
|
Azure DevOps repo clone with GIT_USERNAME from coder_external_auth
Hi there,
I am trying to clone a repo from a private Azure DevOps repository.
The user has authenticated using OAUTH2 via the Coder external_auth documentation
I am then using the template in the repo and injecting the external auth as a data object in Terraform
data "coder_external_auth" "azure_devops" {
id = "primary-devops"
}
resource "kubernetes_deployment" "workspace" {
metadata {
name = "coder-${data.coder_workspace.me.owner}-${lower(data.coder_workspace.me.name)}"
namespace = var.namespace
labels = {
...
}
}
spec {
replicas = data.coder_workspace.me.start_count
selector {
match_labels = {
"coder.workspace_id" = data.coder_workspace.me.id
}
}
strategy {
type = "Recreate"
}
template {
...
}
spec {
container {
name = "coder-${data.coder_workspace.me.owner}-${lower(data.coder_workspace.me.name)}"
# Find the latest version here:
# https://github.com/coder/envbuilder/tags
image = "ghcr.io/coder/envbuilder:0.2.7"
env {
name = "CODER_AGENT_TOKEN"
value = coder_agent.main.token
}
env {
name = "CODER_AGENT_URL"
value = replace(data.coder_workspace.me.access_url, "/localhost|127\\.0\\.0\\.1/", "host.docker.internal")
}
env {
name = "GIT_URL"
value = data.coder_parameter.repo.value == "custom" ? data.coder_parameter.custom_repo_url.value : data.coder_parameter.repo.value
}
env {
name = "GIT_USERNAME"
value = data.coder_external_auth.azure_devops.access_token
}
env {
name = "INIT_SCRIPT"
value = replace(coder_agent.main.init_script, "/localhost|127\\.0\\.0\\.1/", "host.docker.internal")
}
env {
name = "FALLBACK_IMAGE"
value = "codercom/enterprise-base:ubuntu"
}
volume_mount {
name = "workspaces"
mount_path = "/workspaces"
}
}
volume {
name = "workspaces"
persistent_volume_claim {
claim_name = kubernetes_persistent_volume_claim.workspaces.metadata.0.name
}
}
}
}
}
}
When it gets to checking out the repo, Terraform throws a pretty unhelpful error:
#1: 📦 Cloning<EMAIL_ADDRESS>to /workspaces/devcontainers...
Failed to clone repository: clone<EMAIL_ADDRESS>unexpected client error: unexpected requesting<EMAIL_ADDRESS>status code: 400
Falling back to the default image...
Am I missing something or is there something I can test?
It seems that the git clone stage is adding /git-upload-pack to the end of the URL
Could it be related to this issue on the go-git repository?
https://github.com/go-git/go-git/issues/64
@wf1-brandon-grant based on: https://learn.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate?view=azure-devops&tabs=Windows#use-a-pat
It seems the username should be a dummy string, and GIT_PASSWORD should be the token.
Have you tried that?
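A sketch of that change in the template (untested; the dummy username value is an assumption):
```hcl
env {
  name  = "GIT_USERNAME"
  value = "oauth2" # any non-empty dummy string
}
env {
  name  = "GIT_PASSWORD"
  value = data.coder_external_auth.azure_devops.access_token
}
```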
I have given that a shot with the below:
Same error response I am afraid.
Interestingly, if I open up the workspace (as it falls back to the enterprise container image) and hop into the directory that was cloned, there is a .git/config whose URL, when I use it directly, works as expected and clones the repo.
Hi @kylecarbs -
Do you have any thoughts on how we might be able to work around this issue?
Hmm odd that cloning afterwards fails.
I'll look at this today.
@wf1-brandon-grant fixed in the attached PR! I'll do a release post-merge.
@wf1-brandon-grant please let me know if that fixes it or not, it'd be very helpful!
Hey @kylecarbs -
Just gave this a test and it has done the trick. Thank you!
|
2025-04-01T06:38:14.620320
| 2022-09-28T09:55:16
|
1389072582
|
{
"authors": [
"rkdarst"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4813",
"repo": "coderefinery/documentation",
"url": "https://github.com/coderefinery/documentation/issues/247"
}
|
gharchive/issue
|
The source_suffix[es] configuration is no longer needed
Sphinx extensions can now set this automatically.
source_suffix = ['.rst', '.md'] is no longer needed
|
2025-04-01T06:38:14.621176
| 2020-11-11T10:37:58
|
740660247
|
{
"authors": [
"bast",
"wikfeldt"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4814",
"repo": "coderefinery/installation",
"url": "https://github.com/coderefinery/installation/issues/144"
}
|
gharchive/issue
|
fix windows instructions for meld
there's a typo (says "linux") and it should probably have more explicit steps
This has been fixed in #150.
|
2025-04-01T06:38:14.622639
| 2022-09-27T13:12:39
|
1387742133
|
{
"authors": [
"bast",
"rkdarst"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4815",
"repo": "coderefinery/reproducible-research",
"url": "https://github.com/coderefinery/reproducible-research/issues/202"
}
|
gharchive/issue
|
Faster explanation of snakemake
For the snakemake episode, I tried to use the strategy "minimal explanation, let people go to exercises and give people the most time to do the exercise (and read more of the text to learn what I didn't say)".
But, some feedback was that I should have done a better introduction to Snakemake to show what it actually is. I tried to, but with a goal of five minutes of intro, that requires better planning and maybe re-focused episode text.
My first thought is a rework of the first part of the episode thinking about the above. Not necessarily reducing text but thinking about the order and emphasis.
I have significantly shortened the episode in terms of reading and explaining.
|
2025-04-01T06:38:14.625168
| 2017-07-25T14:30:51
|
245421308
|
{
"authors": [
"celgra",
"piq9117"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4816",
"repo": "codergvbrownsville/code-rgv-pwa",
"url": "https://github.com/codergvbrownsville/code-rgv-pwa/issues/7"
}
|
gharchive/issue
|
React.PureComponent without props injection? Looking for guidance.
@piq9117
https://github.com/codergvbrownsville/code-rgv-pwa/blob/master/src/pages/Home/Home.tsx
I was under the impression that PureComponents are simply set up internally with a shallow-comparison implementation of shouldComponentUpdate(). Without a constructor passing in props or initializing state, how is it different from a functional component?
Yours truly,
A Concerned Citizen
According to the docs it's exactly the same as React.Component, but it implements shouldComponentUpdate to do a shallow comparison. I used to implement this with react-pure-renderer-utils. However, since React implements it internally now, I'll just use that.
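Roughly the difference in code (illustrative components):
```jsx
import React from 'react';

// PureComponent bakes in a shouldComponentUpdate that shallow-compares props
// and state, so a re-render is skipped when neither has changed.
class Home extends React.PureComponent {
  render() {
    return <h1>Home</h1>;
  }
}

// A plain function component has no such check: it re-renders whenever its
// parent re-renders, even with identical props.
const HomeFn = () => <h1>Home</h1>;

export { Home, HomeFn };
```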
|
2025-04-01T06:38:14.655845
| 2021-03-23T15:40:50
|
838859354
|
{
"authors": [
"alexnm",
"nkovacic",
"zehfernandes"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4817",
"repo": "codesandbox/sandpack",
"url": "https://github.com/codesandbox/sandpack/issues/29"
}
|
gharchive/issue
|
Multiple Previews in the same provider
Allow multiple previews in the same provider. Very powerful if combined with some preview props like viewport or routes.
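Hypothetical usage this is asking for (not supported at the time of this issue; component names as in sandpack-react):
```tsx
import {
  SandpackProvider,
  SandpackCodeEditor,
  SandpackPreview,
} from '@codesandbox/sandpack-react';

// Several previews sharing one provider/bundler, each potentially pointed at
// a different viewport or route.
export const MultiPreview = () => (
  <SandpackProvider template="react">
    <SandpackCodeEditor />
    <SandpackPreview />
    <SandpackPreview />
  </SandpackProvider>
);
```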
Also stumbled into the same issue. Would be great if the Preview would support adding starting URL so that each preview could potentially show different routes/pages.
Yes, I've been experimenting with this, but it's a bit on hold while I work on some other things. Unfortunately, the original requirements of sandpack assumed one bundler and one preview per sandpack instance, hence the architecture needs a bit of an overhaul to make this work, especially if you want a dynamic number of previews at runtime.
|