| id | text | source | created | added | metadata |
|---|---|---|---|---|---|
1889635659 | 🛑 ojolink is down
In 4c8e823, ojolink (ojolink.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Ojolink is back up in b8ad11a after 26 minutes.
| gharchive/issue | 2023-09-11T04:29:50 | 2025-04-01T04:32:35.086125 | {
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/70185",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2039521269 | 🛑 hcr is down
In d9c7319, hcr (hcr.co.uk) was down:
HTTP code: 429
Response time: 587 ms
Resolved: hcr is back up in 77d6dc8 after .
| gharchive/issue | 2023-12-13T11:33:25 | 2025-04-01T04:32:35.088378 | {
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/74194",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2047817776 | 🛑 simerini is down
In 1e87e84, simerini (simerini.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: simerini is back up in 6b8cc09 after 17 minutes.
| gharchive/issue | 2023-12-19T02:48:09 | 2025-04-01T04:32:35.090856 | {
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/74658",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2094107502 | 🛑 feest-start is down
In 4f774ff, feest-start (feest-start.nl) was down:
HTTP code: 0
Response time: 0 ms
Resolved: feest-start is back up in 65829e8 after 39 minutes.
| gharchive/issue | 2024-01-22T14:57:58 | 2025-04-01T04:32:35.093376 | {
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/75528",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2238127701 | 🛑 ojolink is down
In 0757f9a, ojolink (ojolink.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Ojolink is back up in 36336ed after 8 minutes.
| gharchive/issue | 2024-04-11T16:34:28 | 2025-04-01T04:32:35.095651 | {
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/78872",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2311512677 | 🛑 ojolink is down
In a6f63ee, ojolink (ojolink.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Ojolink is back up in 5d3070a after 27 minutes.
| gharchive/issue | 2024-05-22T21:26:56 | 2025-04-01T04:32:35.097944 | {
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/83878",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2335634160 | 🛑 ojolink is down
In a7b5ccb, ojolink (ojolink.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Ojolink is back up in bb34419 after 8 minutes.
| gharchive/issue | 2024-06-05T11:22:06 | 2025-04-01T04:32:35.100209 | {
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/85514",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2338792981 | 🛑 rapishare is down
In e966258, rapishare (rapishare.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: rapishare is back up in 836e857 after 16 minutes.
| gharchive/issue | 2024-06-06T17:19:23 | 2025-04-01T04:32:35.102493 | {
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/85662",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2344138284 | 🛑 ojolink is down
In 86cf36a, ojolink (ojolink.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Ojolink is back up in 4f5f559 after 9 minutes.
| gharchive/issue | 2024-06-10T14:56:05 | 2025-04-01T04:32:35.105004 | {
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/86147",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2360804607 | 🛑 orkut is down
In 0167ccf, orkut (orkut.co.in) was down:
HTTP code: 0
Response time: 0 ms
Resolved: orkut is back up in 99e0059 after 43 minutes.
| gharchive/issue | 2024-06-18T21:56:54 | 2025-04-01T04:32:35.107405 | {
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/87397",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2381008884 | 🛑 ojolink is down
In 528879d, ojolink (ojolink.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Ojolink is back up in 9d8543f after 13 minutes.
| gharchive/issue | 2024-06-28T19:03:33 | 2025-04-01T04:32:35.109703 | {
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/89360",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2415711623 | 🛑 rapishare is down
In d202c3a, rapishare (rapishare.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: rapishare is back up in 667dae7 after 16 minutes.
| gharchive/issue | 2024-07-18T08:22:18 | 2025-04-01T04:32:35.111984 | {
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/93329",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2418524056 | 🛑 orkut is down
In a4d8049, orkut (orkut.co.in) was down:
HTTP code: 0
Response time: 0 ms
Resolved: orkut is back up in 3c9346c after 8 minutes.
| gharchive/issue | 2024-07-19T09:57:28 | 2025-04-01T04:32:35.114294 | {
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/93550",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2421405744 | 🛑 ojolink is down
In 54b3729, ojolink (ojolink.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Ojolink is back up in 8489343 after 8 minutes.
| gharchive/issue | 2024-07-21T13:27:12 | 2025-04-01T04:32:35.116803 | {
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/93986",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2432019544 | 🛑 torrentzap is down
In c6b2e4b, torrentzap (torrentzap.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: torrentzap is back up in 02bb9d6 after 29 minutes.
| gharchive/issue | 2024-07-26T11:19:28 | 2025-04-01T04:32:35.119087 | {
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/94718",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2446427019 | 🛑 orkut is down
In 177beab, orkut (orkut.co.in) was down:
HTTP code: 0
Response time: 0 ms
Resolved: orkut is back up in bb1e2ff after 9 minutes.
| gharchive/issue | 2024-08-03T15:47:00 | 2025-04-01T04:32:35.121388 | {
"authors": [
"GiuseppeFilingeri"
],
"repo": "GiuseppeFilingeri/upgraded-symmetrical-waddle",
"url": "https://github.com/GiuseppeFilingeri/upgraded-symmetrical-waddle/issues/95864",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1399233597 | DEM.to_vref fails if CRS does not have an EPSG code
Due to this line, which exports self's CRS using the EPSG code, some functionality such as DEM.to_vref does not work when the CRS does not have an EPSG code.
To reproduce, one needs to create a custom CRS:
import xdem
import rasterio as rio
dem = xdem.DEM(xdem.examples.get_path("longyearbyen_ref_dem"))
dem.set_vref("EGM96")
dem.to_vref("WGS84") # -> works fine
# Create DEM with custom CRS
dst_crs = rio.crs.CRS.from_proj4('+proj=aea +lat_0=75 +lon_0=15 +lat_1=70 +lat_2=80 +x_0=0 +y_0=0 +datum=WGS84 +units=m +no_defs=True')
dst_crs.to_epsg() # -> None
dem_reproj = dem.reproject(dst_crs=dst_crs)
dem_reproj.set_vref("EGM96")
dem_reproj.to_vref("WGS84") # -> raises Error below
raises
File ~/development/GlacioHack/xdem/xdem/dem.py:161, in DEM.ccrs(self)
--> 161 self._ccrs = pyproj.Proj(init="EPSG:" + str(int(crs.to_epsg())), geoidgrids=self.vref_grid).crs
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
A first fix is to catch this error and add a NotImplementedError.
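A minimal sketch of that first fix, assuming the DEM.ccrs property shown in the traceback (the epsg guard is the only addition; the other names are taken from the excerpt above):
import pyproj

# Hypothetical excerpt of DEM.ccrs: refuse CRSs without an EPSG code
# explicitly instead of crashing on int(None).
epsg = crs.to_epsg()
if epsg is None:
    raise NotImplementedError("to_vref currently requires a CRS with an EPSG code")
self._ccrs = pyproj.Proj(init="EPSG:" + str(epsg), geoidgrids=self.vref_grid).crs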
Just popping quickly to say that this might be a duplicate of or related to #262, in case you decide to work on it shortly
Haha, I really have a short memory!! 😆 What's even funnier is that I wrote almost exactly the same MWE. I'll close this one then.
Duplicate of https://github.com/GlacioHack/xdem/issues/262.
| gharchive/issue | 2022-10-06T09:45:46 | 2025-04-01T04:32:35.141249 | {
"authors": [
"adehecq",
"rhugonnet"
],
"repo": "GlacioHack/xdem",
"url": "https://github.com/GlacioHack/xdem/issues/313",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
259941541 | Edit the parameters - scenarios
Give the ability to edit the parameters (house, zone, user, user settings) and the scenarios.
That would be convenient, rather than deleting and recreating them.
same as this #144 for the scenarios and +1 for the request
| gharchive/issue | 2017-09-22T21:24:34 | 2025-04-01T04:32:35.147665 | {
"authors": [
"nicoaugereau",
"romain-web"
],
"repo": "GladysProject/Gladys",
"url": "https://github.com/GladysProject/Gladys/issues/229",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1359066420 | Update sbt-scalajs-bundler to 0.21.0
Updates ch.epfl.scala:sbt-scalajs-bundler from 0.20.0 to 0.21.0.
GitHub Release Notes - Version Diff
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Adjust future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "ch.epfl.scala", artifactId = "sbt-scalajs-bundler" } ]
Or, add this to slow down future updates of this dependency:
dependencyOverrides = [{
pullRequests = { frequency = "@monthly" },
dependency = { groupId = "ch.epfl.scala", artifactId = "sbt-scalajs-bundler" }
}]
labels: sbt-plugin-update, early-semver-major, semver-spec-minor, commit-count:1
Superseded by #210.
| gharchive/pull-request | 2022-09-01T15:51:23 | 2025-04-01T04:32:35.151544 | {
"authors": [
"scala-steward"
],
"repo": "GlasslabGames/html.scala",
"url": "https://github.com/GlasslabGames/html.scala/pull/200",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
846637753 | ImmutableSeq#sameElements(ImmutableSeq<?>, boolean)
Request an optimization.
Doesn't make much sense
| gharchive/issue | 2021-03-31T13:10:00 | 2025-04-01T04:32:35.152453 | {
"authors": [
"ice1000"
],
"repo": "Glavo/kala-common",
"url": "https://github.com/Glavo/kala-common/issues/30",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1788728928 | Draft: fix/react-tooltip-prod
fixes #99
This is still in draft and not working yet; I just wanted to test out the Vercel deployment :P
fixed! it was an issue with Next 13's SWC bundler + react-tooltip version
We need separate tooltips for the modal and the root app, so I kept those as is and just bumped up the minor version of Next. @gglucass, wanna give a quick smoke test?
| gharchive/pull-request | 2023-07-05T04:15:01 | 2025-04-01T04:32:35.176710 | {
"authors": [
"Syncretik"
],
"repo": "Glo-Foundation/glo-wallet",
"url": "https://github.com/Glo-Foundation/glo-wallet/pull/175",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1795078445 | PinePhone (Dysfunctional Features)
I recently installed GloDroid (LineageOS/Android 13) to my PinePhone and have run into some features that are not working.
- Automatic brightness
- Reading external storage
- Camera
For the automatic brightness, I believe my model is capable of this and should have the sensor for it, I can even find the option for it in the settings. However, I can only see it in search options and if I select it nothing happens.
External storage is my biggest problem. No matter what brand or format of card I use, nothing shows up in stock or third-party storage apps or settings. I even wiped a card and created only a primary partition to see if I would be prompted to format it, but there is no such option for me. What strikes me most about this is that I used a microSD to flash the OS to eMMC successfully...
As far as the camera goes, I could live without it, but I would prefer not to and I know it is functional. I do however expect this issue to be the least clear on how to resolve. I don't know if it's a settings issue or a missing package?
I'm curious why you used strikeout on the camera issues, were you able to get the camera working?
I've tested several of both the eMMC and sdcard releases, confirming both front and back camera dip switches were on and the camera app has always crashed before launch.
I just tried the latest LineageOS/Android 13 sdcard release 2023w45 and i'm having the same issue.
| gharchive/issue | 2023-07-08T19:36:34 | 2025-04-01T04:32:35.180551 | {
"authors": [
"MrMendelli",
"ulfnic"
],
"repo": "GloDroidCommunity/pine64-pinephone",
"url": "https://github.com/GloDroidCommunity/pine64-pinephone/issues/24",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1874037889 | Incorrect calculation of pixel resolution
https://github.com/Global-Water-Security-Center/data-exploration/blob/7407c118762f285b991716e52868591e0fffbeed/netcdf_to_geotiff.py#L63
This is incorrect for two reasons:
It suffers from the fencepost problem.
I have seen situations where data gets produced such that the first and last longitudes end up the same, or overlapping, or even have the discontinuity in the middle of the grid! Such a dataset would produce the wrong result even if you correct for the fencepost problem.
I propose a nifty solution that has served me well over the years:
np.median(np.diff(coord_array))
It isn't perfect by any stretch, and there might be better approaches, but it has worked decently for me. Another thing to consider is using rioxarray to load up the netcdf and try to autodetect the spatial array, but my success with its autodetection of underspec'ed datasets is mixed.
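To illustrate, a minimal sketch of that suggestion (the coordinate values below are made up; np.diff and np.median do all the work):
import numpy as np

def pixel_resolution(coords):
    # Median of consecutive spacings: avoids the fencepost error of
    # (last - first) / len(coords) and shrugs off a duplicated or
    # wrapped first/last coordinate.
    return np.median(np.diff(coords))

lons = np.array([0.25, 0.75, 1.25, 1.75, 0.25])  # last value wraps around
print(pixel_resolution(lons))  # 0.5, despite the discontinuity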
neat! thanks for the tip!
| gharchive/issue | 2023-08-30T16:44:51 | 2025-04-01T04:32:35.184387 | {
"authors": [
"WeatherGod",
"richpsharp"
],
"repo": "Global-Water-Security-Center/data-exploration",
"url": "https://github.com/Global-Water-Security-Center/data-exploration/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
435431979 | params on GA trackTiming only work with this for some reason
I tried both ways of sending params; this was the only way I could get it to work.
Is this still something that is relevant to the project? If not this PR will be closed.
| gharchive/pull-request | 2019-04-20T19:36:27 | 2025-04-01T04:32:35.198049 | {
"authors": [
"markballenger",
"saifbechan"
],
"repo": "Glovo/vue-multianalytics",
"url": "https://github.com/Glovo/vue-multianalytics/pull/57",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
83129465 | Huge memory leaks
There are two huge memory leaks in Glowstone++ right now.
I don't know what causes them yet, but the objects/classes created are:
io.netty.util.Recycler$WeakOrderQueue
io.netty.util.Recycler$WeakOrderQueue$Link
java.lang.ref.Finalizer
Glowstone does have the java.lang.ref.Finalizer leak. I should mention that this leak does not grow nearly as fast as the netty leak.
Information on memory leaks with Finalizer.
I haven't been able to find much about the netty stuff, but here's the class file.
This is not a problem directly caused by Glowstone++. We could be using a library in a wrong manner.
The memory usage climbs fairly linearly. I've seen the eden space grow to 1GB sometimes, from around 7MB.
It only happens while a player is on the server.
This is really out of our control, we can't do anything about it except use optimized start scripts.
Hi,
Have you found any solution for this leak?
I think I'm having the same issue because I'm getting this error:
DEBUG[cb-io-1-1] c.c.c.c.e.AbstractGenericHandler: Channel Active.
sun.rmi.transport.tcp.TCPTransport$AcceptLoop executeAcceptLoop
WARNING: RMI TCP Accept-0: accept loop for ServerSocket[addr=0.0.0.0/0.0.0.0,port=0,localport=52557] throws
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:691)
at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:943)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1336)
at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:402)
at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:359)
at java.lang.Thread.run(Thread.java:722)
And in the heap dumps I create I can see 'io.netty.util.Recycler$WeakOrderQueue$Link' with a retained size of 100%
I found this bug, but I'm not sure it's relevant to my problem:
https://github.com/netty/netty/issues/3166
I have fixed some memory leaks related to not releasing some ByteBufs.
| gharchive/issue | 2015-05-31T16:34:01 | 2025-04-01T04:32:35.204364 | {
"authors": [
"mastercoms",
"maytal-shamir"
],
"repo": "GlowstonePlusPlus/GlowstonePlusPlus",
"url": "https://github.com/GlowstonePlusPlus/GlowstonePlusPlus/issues/71",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
514947796 | System diagram procedures are currently included in the code coverage results
The system diagram procedures are currently included in the code coverage results.
These shouldn't be included, as they are system procedures, so we shouldn't need to write tests for them, and it negatively impacts the total coverage %.
I believe they can be recognised by either their name (sp_%diagram) or by the extended property microsoft_database_tools_support.
I am currently using Code Coverage as part of the RedGate SQLTest plugin.
I had approached RedGate about this and they pointed me to here :)
https://forum.red-gate.com/discussion/85304/how-can-i-exclude-the-system-stored-procedures-for-diagrams-from-code-coverage-results
@GoEddie
Hi @davidlyes,
SQLCover already has the ability to take a filter to exclude specific objects ("sp_.*diagram" would work for you) but there is nothing in the SQLTest ui to allow you to specify anything that can be passed to SQLCover.
I am a bit hesitant to add in specific filters because although you don't want to see these someone else might want to see them.
A halfway measure might be to get SQLCover to look in a config file in a known place for additional filters; that way it will work with SQLTest and anywhere else.
So your options are:
Don't use the Redgate UI to generate your coverage reports
Ask Redgate to include the ability to add a filter as part of their UI
Wait for me to implement an additional config file for extra filters - to make sure people didn't get additional filters they didn't know about I would want these displayed on the output somehow so there are a few things to do here. (you would also need Redgate to use the updated version of the dll)
ed
Thanks for your quick feedback @GoEddie!
I will go back to RedGate to see if they would be able to implement filters into their UI, since it is something that is already supported by SQLCover.
Hi Ed.
There is no documentation on how to filter stored procedures while calculating code coverage with SQLCover. It would be great if you could show one example of this. Thanks in advance.
| gharchive/issue | 2019-10-30T20:04:01 | 2025-04-01T04:32:35.222136 | {
"authors": [
"GoEddie",
"davidlyes",
"kishorechikka"
],
"repo": "GoEddie/SQLCover",
"url": "https://github.com/GoEddie/SQLCover/issues/45",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
594384421 | Dealing with references
I'm wondering how the godot::Reference class should be used inside C++ code.
Is it more correct to store it in a pointer or a simple variable?
I'll clarify with an example:
class MyRef : public Reference {
GODOT_CLASS(MyRef, Reference)
public:
... // all needed code
};
class MyRefUse : public Node {
GODOT_CLASS(MyRefUse, Node)
MyRef * m_ref; // should it be a pointer or a simple field?
public:
... // _init, constructor, destructor
void set_myref(MyRef * ref) {
m_ref = ref;
}
MyRef * get_myref() {
return m_ref;
}
static void _register_methods() {
register_property<MyRefUse, MyRef*>("myref", &MyRefUse::set_myref, &MyRefUse::get_myref, nullptr);
... // register other methods
}
};
And both classes are registered.
Update: in case get_myref() returns a pointer to a copy of m_ref (i.e. new MyRef(*m_ref)),
how would the engine manage the pointer deletion? Will it delete the pointer normally as the reference count goes to zero, or will it result in a memory leak? Thank you for a response
References are best used with a Ref<T> wrapper, as it will take care of handling refcounting for you. Otherwise, reference() and unref must be called manually when your code grabs or releases ownership.
However, there are known issues with Ref in the tracker, where leaks happen.
Thank you
As you explained, I followed your solution, but when I try to create a Ref, that instance immediately destroys the internal object (the Reference one), as the ref_count seems to be initialized to zero.
What are the conditions to meet in order to properly create a Reference?
At the moment there are the following methods:
void _init() member
A constructor which takes no arguments
A destructor
GODOT_CLASS(myclassname, Reference)
static void _register_methods() member
What have I forgotten?
Furthermore, I tried the approach mentioned in this thread, but it does not seem to work.
I don't know, I only assume references should work the same way as they do in the engine, but GDNative seems to have some quirks and bugs that aren't figured out yet
After some debugging I found the following things:
init_ref() returns true (I found the value by override and super-class call)
calling _new() immediately erases the object, which is unexpected
contrary to the Godot version, is_referenced() is not present, so I cannot check its behaviour
when initializing the object via a pointer (new operator) the object is kept alive, but when initializing the Ref with that pointer the program crashes
Finally, Ref<T> works fine for instances initialized in GDScript and then passed into GDNative C++ code.
| gharchive/issue | 2020-04-05T10:35:12 | 2025-04-01T04:32:35.232350 | {
"authors": [
"DaedricSpartan99",
"Zylann"
],
"repo": "GodotNativeTools/godot-cpp",
"url": "https://github.com/GodotNativeTools/godot-cpp/issues/388",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
363216369 | The amount of GOLOS received when reducing Golos Power (GP) is not displayed
It stopped showing how many GOLOS were received as a result of a power-down
By the way, where in the new interface are the power-down button and the power-down information?
In the Wallet, next to the 'Delegate GP' button, add a 'Decrease Golos Power' button,
and add a window with the information 'The next Golos Power decrease is possible in 7 days.'
Display this warning in the wallet as well
Design
Page design
Mobile version design
Duplicate of https://github.com/GolosChain/tolstoy/issues/1474
The task is ready on the sandbox
| gharchive/issue | 2018-09-24T16:11:00 | 2025-04-01T04:32:35.239078 | {
"authors": [
"jevgenika",
"litrbooh"
],
"repo": "GolosChain/tolstoy",
"url": "https://github.com/GolosChain/tolstoy/issues/874",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2488431895 | Wallet Detail - Action Summary & Action Logs for stewards
Business Requirement
For Direct Payments GoodCollectives, Each payment must trace back to a specific activity performed by the steward. This should be accessible from both the steward's wallet profile and the collective profile. As NFTs sometimes trigger multiple payments for different wallets, a new design must be created to account for and link to the NFTs Stewards were paid from, not necessarily the NFTs they hold.
Profile Page
A steward/donor profile page should show all pools they participated in, either by donating or by performing an action in a pool (or both).
It shows an 'impact-profile' card displaying aggregated data for any action (donation/stewards-action).
Next to this card will be shown cards per-pool a user participated in.
The block/section that shows 'actions' should link to the steward's Action Log page.
Detailed description
In the Action Log for stewards:
The treatment of Actions in the Summary should be updated to reflect the latest designs
The Action Log page (shown per steward) should be updated to link each action to all of the following:
The NFT that triggered the payment
The IPFS proof from that NFT
The Payment Transaction (minting) hash
Design Reference:
[ ] WIP - https://www.figma.com/design/ihw1PxBvLxacTHnN2aj4lC/3.-Product?node-id=19677-19070&t=5UFEZcJpM3XMuCcA-1
[ ] Need to choose treatment for Actions button in Wallet Detail (see options below)
@decentralauren
Screenshots:
Figma link:
https://www.figma.com/design/ihw1PxBvLxacTHnN2aj4lC/3.-Product?node-id=23077-16481&t=EVMRRReMLpG3KnkB-1
| gharchive/issue | 2024-08-26T19:31:07 | 2025-04-01T04:32:35.247532 | {
"authors": [
"SanaJamm",
"decentralauren"
],
"repo": "GoodDollar/GoodCollective",
"url": "https://github.com/GoodDollar/GoodCollective/issues/221",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
533177904 | User unable to delete feed card and has duplicates
https://app.birdeatsbug.com/sessions/44HU1HrSQrrH-D3G0Wvwa
It seems this happens when there are also duplicates of card
why do we have dups? maybe its related to gun return empty result on first get. we have two safe guards against dups
in startSystemFeed we check that firstVisitAppDate is null
in enqueueTx we check id doesnt exists
Definition of Done
[ ] why do we have dups?
[ ] delete should work even if we have dups
@sirpy
I tried to reproduce this issue many times in different ways without success. I tested it locally, on gooddev, and on goodqa. I tried coming in with a magicLink while already logged in, reproducing it in combination with the #1017 bug when getFeedPage starts fetching the feeds in a loop, and other basic user-flow situations. Every time, I have no dups in my feed list.
Then I went through all of the code related to adding, updating, and deleting feeds to find some place in the code where such an issue (feed dups) might happen. After investigating and testing (@AlexeyKosinski was also involved), I don't understand how it is possible to have dups in the feed list. Maybe it is some very specific case for that user; they might have done something unusual for a regular user.
Also, I should note that since we have been on this project we haven't faced such an issue at all.
@yaroslav-fedyshyn-nordwhale like I've explained, I believe the issue is from the gun bug:
if you try to get a user field for the first time, it might return an empty object.
I think I've found how to recreate this.
Open an incognito tab and use the seed phrase; I did it and it created another 'Claim your GoodDollar' card.
It happens because IndexedDB in this case is empty, and user properties have to be fetched from the server first, so on the first fetch they will be empty.
@sirpy
What should we do with this issue? If this issue is related to the gun indexedDB setup, then it shouldn't be relevant anymore.
@AnastasiiaOdnoshevna Please, check it again. It should be fixed after #974
Checked on Dev environment
The issue is not reproduced
| gharchive/issue | 2019-12-05T07:38:00 | 2025-04-01T04:32:35.252986 | {
"authors": [
"AnastasiiaOdnoshevna",
"Nordwhale",
"sirpy",
"yaroslav-fedyshyn-nordwhale"
],
"repo": "GoodDollar/GoodDAPP",
"url": "https://github.com/GoodDollar/GoodDAPP/issues/1027",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
607810660 | (BUG) Cards icons: when switching icons from animation to static ones there's a change in their position
We had this issue before:
(Bug) Dashboard - Feed card: Claim button Static/Animation bug #1385
@yaroslav-fedyshyn-nordwhale
@serdiukov-o-nordwhale
Cards icons: The animation is displayed without any jumps and changing of position
Checked on Dev env (V 0.19.5-0)
The device used:
Desktop// Windows 10 x64 // Google Chrome 81.0.4044.129
Video:https://www.screencast.com/t/zqeEGUxSPsS
| gharchive/issue | 2020-04-27T19:53:42 | 2025-04-01T04:32:35.255420 | {
"authors": [
"AnastasiiaOdnoshevna",
"LiavGut"
],
"repo": "GoodDollar/GoodDAPP",
"url": "https://github.com/GoodDollar/GoodDAPP/issues/1672",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
938916097 | [BUG] The counter has incorrect format on the Claim page
Pre-conditions:
The user is logged in
The user has claimed
Steps to reproduce:
Open the https://wallet.gooddollar.org/ page half an hour before the claim cycle restarts
Go to the Claim page
Pay attention to the counter
Expected result: The counter should have the format hh.mm.ss, even if the hours or minutes are 00
Actual result: The counter has incorrect format on the Claim page
Environment: PROD 1.29.0
Devices list:
Windows 10 // Google Chrome 91
Attachment:
https://www.screencast.com/t/TavAhptt
@julianpolcode
The issue is not reproduced on web PROD. The counter has the correct format on the claim page.
Checked on the: PROD 1.29.1
Windows 10 // Google Chrome v91
Attachment:
https://www.screencast.com/t/0bAq9qh837nl
| gharchive/issue | 2021-07-07T13:49:53 | 2025-04-01T04:32:35.261286 | {
"authors": [
"iLystopad",
"julianpolcode"
],
"repo": "GoodDollar/GoodDAPP",
"url": "https://github.com/GoodDollar/GoodDAPP/issues/3316",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
491654195 | Feed - cards: the long separator line should be aligned between the icon and the G$ amount.
@LiavGut a picture is worth a thousand words
We're waiting for Liav's approve or instructions.
@Nordwhale to approve what? where's the pull request/changes? pictures showing how it is fixed
@sirpy We didn't make any changes; we checked on different devices and the line looks much better than on Liav's screenshot. The line is not in the middle on some resolutions, a little higher than expected, but it doesn't look bad.
I contacted Liav to confirm whether the current version is OK or still needs changes.
@LiavGut answered
"It seems that we have some issues with the design of the cards because the design does not align with the wireframes design.
Elements are not aligned, the margins are not the same, etc.
I'll talk with Hadar and think about how to work on that."
Here are current screenshots without any changes.
http://joxi.ru/nAyd7wjUgKeXN2
http://joxi.ru/Dr8xbkLHoR5kEr
http://joxi.ru/zANOxLlSv30lkr
@Nordwhale
The last screenshot is Chrome emulating new iPhones; this is not a good test since it doesn't emulate the high pixel density, so fonts look bigger. This needs to be tested on the iOS simulator.
@LiavGut Can you share on which device you saw the line not in the middle?
@sirpy I also tested that on the Chrome emulator.
In any case, the lines should be in the middle on any device (like the iPhone 6/7 in the print screen), and the avatar photo and icon should be at the same height.
So the line is supposed to be OK, according to the images provided.
Icon+image alignment was fixed in a different story and tested by QA.
@LiavGut so I'm closing this one.
| gharchive/issue | 2019-09-10T12:53:51 | 2025-04-01T04:32:35.266879 | {
"authors": [
"LiavGut",
"Nordwhale",
"sirpy"
],
"repo": "GoodDollar/GoodDAPP",
"url": "https://github.com/GoodDollar/GoodDAPP/issues/566",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
983556770 | Staking guide
[ ] write a guide on gitbook
[ ] include images from etherscan.io / fuse explorer
[ ] part 1 aave/compound
[ ] part 2 G$ staking
https://docs.gooddollar.org/support-gusd/stake
| gharchive/issue | 2021-08-31T07:53:50 | 2025-04-01T04:32:35.268969 | {
"authors": [
"sirpy",
"tomerGD"
],
"repo": "GoodDollar/GoodProtocol",
"url": "https://github.com/GoodDollar/GoodProtocol/issues/180",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2540023459 | ⚠️ Palworld Tunnel (Connection URL) has degraded performance
In d8755fe, Palworld Tunnel (Connection URL) ($PG_PANEL/tstat.txt) experienced degraded performance:
HTTP code: 200
Response time: 19 ms
Resolved: Palworld Tunnel (Connection URL) performance has improved in 268a1e0 after 11 minutes.
| gharchive/issue | 2024-09-21T06:46:54 | 2025-04-01T04:32:35.273032 | {
"authors": [
"athaller"
],
"repo": "GoodVibesGaming/upptime",
"url": "https://github.com/GoodVibesGaming/upptime/issues/26",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
76739233 | Author release notes
Following on from #152, I've tagged and published a new release to https://www.npmjs.com/package/accessibility-developer-tools. The next thing we'll want to do is author release notes.
If we're short on time, we can just generate a changelog with git log. If, however, we would prefer to do a complete write-up, that also makes sense. @alice do you have a preference here? I can get the former up pretty soon but won't have time to help with the latter until after I/O.
Thanks for getting these published, @alice! :star: They look ace!
| gharchive/issue | 2015-05-15T14:15:37 | 2025-04-01T04:32:35.287623 | {
"authors": [
"addyosmani"
],
"repo": "GoogleChrome/accessibility-developer-tools",
"url": "https://github.com/GoogleChrome/accessibility-developer-tools/issues/153",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
370333205 | fix: handle negative area results in computeQuadArea
This patch fixes a case in which computeQuadArea calculates the area size correctly, but returns the area as a negative number.
This occurs when DOM.getContentQuads returns quads in a specific order.
E.g. the array [ { x: 463, y: 68.5 }, { x: 437, y: 68.5 }, { x: 437, y: 94.5 }, { x: 463, y: 94.5 } ] produced an area of -676.
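For illustration only (a Python sketch, not puppeteer's actual code), the shoelace computation below reproduces the -676 from the quad above; taking the absolute value of the signed sum is what the patch amounts to:
def compute_quad_area(quad):
    # Shoelace formula: the raw sum is signed by the winding order of the
    # vertices, so take the absolute value before treating it as an area.
    area = 0.0
    for i, p1 in enumerate(quad):
        p2 = quad[(i + 1) % len(quad)]
        area += p1["x"] * p2["y"] - p2["x"] * p1["y"]
    return abs(area) / 2

quad = [{"x": 463, "y": 68.5}, {"x": 437, "y": 68.5},
        {"x": 437, "y": 94.5}, {"x": 463, "y": 94.5}]
print(compute_quad_area(quad))  # 676.0 rather than -676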
Thanks for the PR! Do you have a test case that covers this?
My pleasure :-)
There's no problem creating a test case for this fix.
From what I see, it has to depend only on the computeQuadArea function in order to be hermetic,
because if the test depends on DOM.getContentQuads (which is the normal flow), we wouldn't know whether it passes because of this fix or because of a change in the quad order returned from getContentQuads.
Do you write tests in your package that test a single function?
What do you think should be added?
A test for this would set up some html/css to produce quads in the right order. Then it would call elementHandle.click on the element with the strange quads, and verify that it indeed was clicked. You can find similar tests in input.spec.js
That's what I thought.
I was concerned that an element like that might stop producing a strange quad order in future versions of Chromium (because of a possible change in DOM.getContentQuads),
which would cause my test not to be hermetic in the long run.
Anyway, I'll add a test for this case.
Thanks!
Hi, added a test case to the PR.
It seems the node8 (macOS) coverage has failed; is there anything I can do about that?
My pleasure,
Thanks!
| gharchive/pull-request | 2018-10-15T20:36:02 | 2025-04-01T04:32:35.312514 | {
"authors": [
"JoelEinbinder",
"zeevrosental"
],
"repo": "GoogleChrome/puppeteer",
"url": "https://github.com/GoogleChrome/puppeteer/pull/3413",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
287501459 | Some pages can't load
When trying to load our homepage we get the following error (the online version doesn't load either):
{ Error: The value of property "payload" is longer than 1048487 bytes.
at /Users/paulhachmang/Sites/reachdigital.nl/rendertron/node_modules/grpc/src/client.js:554:15 code: 3, metadata: Metadata { _internal_repr: {} } }
This happens when I try to load it through prpl-server, but it doesn't happen when I access the URL directly. I'm not sure how the request differs (no idea how I should debug something like that).
Hi @paales,
I'm unable to reproduce.
Running curl https://reachdigital.nl/magento-2/magento-2-community-webshop -A 'googlebot' works for me. Size looks correct to me too. ~300KB raw, ~34KB GZIP.
Closing as it seems it works? Feel free to reopen/comment that its still broken.
Have you tried running it through the middleware?
Loading the page via the above rendertron URL, everything works as expected, but if I run it through the middleware I'm getting the error.
Does the curl request I pasted above not execute through the middleware?
@samuelli You're right; I had to disable all caching on the online environment, which makes it work. I was just trying to get it to work on render-tron.appspot.com, but that whole service seems to be offline :(
https://render-tron.appspot.com/render/https://reachdigital.nl/magento-2/magento-2-community-webshop?wc-inject-shadydom=true http://cloud.h-o.nl/pQNY
| gharchive/issue | 2018-01-10T16:44:00 | 2025-04-01T04:32:35.317067 | {
"authors": [
"paales",
"samuelli"
],
"repo": "GoogleChrome/rendertron",
"url": "https://github.com/GoogleChrome/rendertron/issues/159",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
60656770 | Add es6 extended object literals sample
I hope I hit the tone of the samples.
@addyosmani Please review
@addyosmani Please review, again ;) I tried to make it less contrived (thanks for the link).
I also remove the invocation of the generator. It felt like it added to much noise and iterator/generator usage is covered in a separate sample.
Sounds good to me. Done
That’s what happens when you just copy-and-paste from an older commit :-/ Fix’d.
:+1:
| gharchive/pull-request | 2015-03-11T13:19:37 | 2025-04-01T04:32:35.319078 | {
"authors": [
"addyosmani",
"surma"
],
"repo": "GoogleChrome/samples",
"url": "https://github.com/GoogleChrome/samples/pull/91",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
54208904 | Grunt errors with new Chrome App js files
Run Grunt locally and I see these errors:
Running "jstdPhantom" task
Starting jstd server....
Starting PhantomJS...
Running tests...
setting runnermode QUIET
Safari: Reset
Safari: Reset
.............................
Total 29 tests (Passed: 29; Fails: 0; Errors: 0) (17.00 ms)
Safari 534.34 Linux: Run 31 tests (Passed: 29; Fails: 0; Errors 2) (17.00 ms)
error loading file: /test/js/appwindow.js:62: ReferenceError: Can't find variable: randomString
error loading file: /test/js/background.js:15: ReferenceError: Can't find variable: chrome
Total Passed: 29, Fails: 0
PhantomJS threw an error:
Done, without errors.
I have a fix for this ready in a local repo based on @jiayliu 's unittest/refactor work. Will move forward once that work is in master.
| gharchive/issue | 2015-01-13T15:46:26 | 2025-04-01T04:32:35.336331 | {
"authors": [
"chuckhays"
],
"repo": "GoogleChrome/webrtc",
"url": "https://github.com/GoogleChrome/webrtc/issues/367",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1173797987 | Migrating the migration guides to the new docs site
Could you please document the path to upgrade from v5 to v6 as you did in the old docs pages?
Thanks :)
Good point! @malchata, just want to get this on your radar.
The following links all still work in the meantime:
https://developers.google.com/web/tools/workbox/guides/migrations/migrate-from-v5
https://developers.google.com/web/tools/workbox/guides/migrations/migrate-from-v4
https://developers.google.com/web/tools/workbox/guides/migrations/migrate-from-v3
https://developers.google.com/web/tools/workbox/guides/migrations/migrate-from-v2
https://developers.google.com/web/tools/workbox/guides/migrations/migrate-from-sw
Thanks a lot :)
I think we should have a separate ToC for migration guides, @jeffposnick, something that lives at a URL like developer.chrome.com/docs/workbox/migration. You think that might work?
Absolutely. That sounds 👍
(And no rush, since the old pages are still hosted and discoverable. I know that there are other priorities, but I wanted to keep this open so that we don't lose track.)
Migration guides have been migrated to developer.chrome.com: https://developer.chrome.com/docs/workbox/migration/
| gharchive/issue | 2022-03-18T16:45:03 | 2025-04-01T04:32:35.341913 | {
"authors": [
"jeffposnick",
"malchata",
"planeth44"
],
"repo": "GoogleChrome/workbox",
"url": "https://github.com/GoogleChrome/workbox/issues/3044",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1732475225 | remove tlsEnd from FetchTimingInfo in network module
It is not present in the spec: https://w3c.github.io/webdriver-bidi/#type-network-FetchTimingInfo
Bug: #765
This can only be merged once WPT is updated (https://github.com/GoogleChromeLabs/chromium-bidi/pull/787).
| gharchive/pull-request | 2023-05-30T15:10:07 | 2025-04-01T04:32:35.343609 | {
"authors": [
"thiagowfx"
],
"repo": "GoogleChromeLabs/chromium-bidi",
"url": "https://github.com/GoogleChromeLabs/chromium-bidi/pull/789",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
525476337 | [docs] How to avoid fetching different assets during fluctuations
Consider this scenario:
My phone is connected to WiFi (4g), the power goes off, and my mobile data (2g) kicks in. If multiple assets were fetched for 4g, all their low-quality equivalents will now be fetched.
This will also happen when I am travelling between places, as my data speed keeps fluctuating.
I am not sure what the solution to this problem is, but I believe we shouldn't be replacing assets that are already fully loaded.
This is a good gotcha to raise awareness around. We have thought about this and documented one pattern for addressing over-fetching using Service Workers in https://github.com/GoogleChromeLabs/adaptive-loading/tree/master/cra-network-aware-only-if-cached-loading.
I'm going to rename this issue and keep it open as a reminder to add a docs or wiki entry about how to address this particular consideration in a little more detail.
@addyosmani
I think we have one more which is not landed yet.
https://github.com/GoogleChromeLabs/adaptive-loading/pull/27
| gharchive/issue | 2019-11-20T04:35:49 | 2025-04-01T04:32:35.346570 | {
"authors": [
"addyosmani",
"anton-karlovskiy",
"astronomersiva"
],
"repo": "GoogleChromeLabs/react-adaptive-hooks",
"url": "https://github.com/GoogleChromeLabs/react-adaptive-hooks/issues/28",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
317413626 | WIP : multi panels (for question comment purpose)
Just making a PR because the commenting UI is better on a PR.
The approach I took was to accept any HTML elements as children of <multi-panels>.
<multi-panels> will simply take even-indexed elements as headings, and the element following each (thus odd-indexed) as content, allowing users of this component more freedom in markup.
Note: I don't really understand TypeScript (yet) and this might very well not be great code (yet)!
Status:
The current code has all the refactoring from the previous code review (minus the <details> consideration).
This component currently assumes there is always a heading/content pair,
and expands the corresponding content via nextElementSibling.
If a developer using this component does something like below (per @jakearchibald's review) and the user clicks on a heading before the content is created, it should be fine (simply no nextElementSibling)
multiPanel.append(parse(`<h1>Hello!</h1>`));
const data = await fetch(content).then(r => r.json());
multiPanel.append(parse(data.html));
However, if the developer inserts elements asynchronously like above in the middle of the multi-panel children, it will mess up which panel gets expanded.
I'm not sure whether expecting the developer to 'always append 2 elements together' is a reasonable expectation (the code will be simpler!) or whether we should bulletproof this with a unique-ID approach and control which element gets expanded based on its ID.
Note: currently a unique ID is assigned just for a11y, but not for control.
closing this in lieu of #95
| gharchive/pull-request | 2018-04-24T22:11:21 | 2025-04-01T04:32:35.350389 | {
"authors": [
"kosamari"
],
"repo": "GoogleChromeLabs/squoosh",
"url": "https://github.com/GoogleChromeLabs/squoosh/pull/24",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
810340873 | [livy] #871 | Python3 support
fixes #871
/gcbrun
Thank you for the contribution!
Thanks for quick merge :)
We have weekly releases, so it should be out by end of the next week.
| gharchive/pull-request | 2021-02-17T16:31:38 | 2025-04-01T04:32:35.353265 | {
"authors": [
"medb",
"wsmolak"
],
"repo": "GoogleCloudDataproc/initialization-actions",
"url": "https://github.com/GoogleCloudDataproc/initialization-actions/pull/874",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
135847589 | Explain how to add lib to import path
Fixes #9
LGTM.
| gharchive/pull-request | 2016-02-23T19:57:12 | 2025-04-01T04:32:35.357025 | {
"authors": [
"jonparrott",
"waprin"
],
"repo": "GoogleCloudPlatform/appengine-django-skeleton",
"url": "https://github.com/GoogleCloudPlatform/appengine-django-skeleton/pull/10",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
218332558 | TESTING
kokoro presubmit testing, don't review, don't merge.
Codecov Report
Merging #365 into master will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #365 +/- ##
=======================================
Coverage 62.61% 62.61%
=======================================
Files 65 65
Lines 1701 1701
Branches 254 254
=======================================
Hits 1065 1065
Misses 530 530
Partials 106 106
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 76b51a8...1488974. Read the comment docs.
| gharchive/pull-request | 2017-03-30T21:24:55 | 2025-04-01T04:32:35.361906 | {
"authors": [
"akerekes",
"codecov-io"
],
"repo": "GoogleCloudPlatform/appengine-plugins-core",
"url": "https://github.com/GoogleCloudPlatform/appengine-plugins-core/pull/365",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1272425495 | Option to scan entire table for Cloud Data Loss Prevention and Dataflow
For Cloud Data Loss Prevention and Dataflow solution on this page: https://cloud.google.com/architecture/automatically-apply-sensitivity-tags-in-data-catalog
It would be helpful to also have the option to scan the entire database table.
It's already implemented:
use --sampleSize=0
| gharchive/issue | 2022-06-15T15:45:24 | 2025-04-01T04:32:35.363504 | {
"authors": [
"anantdamle",
"jingcyang3"
],
"repo": "GoogleCloudPlatform/auto-data-tokenize",
"url": "https://github.com/GoogleCloudPlatform/auto-data-tokenize/issues/55",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
275143701 | Supply GCS credentials through the command line
Hi,
To my understanding, in order to execute a distcp command to upload data from our local Hadoop cluster into GS, I have to define the parameter google.cloud.auth.service.account.json.keyfile and make sure all worker nodes in my cluster have the keyfile available locally.
Since I want to allow multiple users across the org to upload data to their own bucket in GS using their own credentials, I want to be able to send GS credentials through the distcp command line like we currently do when uploading data to S3.
Is there a solution in the pipeline which should enable this functionality?
Best,
Eyal
Now you can implement an AccessTokenProvider interface. In your case, you can implement it to read from the command line.
Hi @cyxxy,
Can you elaborate on how I can accomplish this when I only want to execute a distcp command with variables?
Or something like
hadoop distcp -Dfs.gs.accesstoken=xxxx ...., which also requires implementation.
Actually you can specify user credentials in command line, but due to a bug it is not working for distcp at the moment. There is a similar issue https://github.com/GoogleCloudPlatform/bigdata-interop/issues/62, so I'm going to close this one and track the work in https://github.com/GoogleCloudPlatform/bigdata-interop/issues/62.
You can use client credentials in the following way:
hadoop distcp -Dfs.gs.auth.client.id=<client-id> -Dfs.gs.auth.client.secret=<client-secrect> -Dfs.gs.auth.service.account.enable=false gs://... gs://...
And you can get the client credentials from https://console.cloud.google.com/apis/credentials, click on "Create credentials", select "OAuth client ID" then "Other".
similar to the s3a inline credential feature ("hdfs fs -ls s3a://key:secret@my-bucket/")
You don't want that; it leaks secrets through all the logs. This is why S3A tells you off for trying, and it is up for discussion whether to disable it. The core defensible distcp use case was copying across accounts, but with per-bucket secrets, that can be done in other ways.
Thanks for the info guys.
So how do you suggest enabling a cross-org solution where each user can upload to their own bucket (using distcp) without sharing the same key file for all users in the HDFS config file?
Each user can specify their own key file in --files argument (i.e. hadoop --files=<KEY_FILE> distcp ...), Hadoop will take care of distributing it on cluster nodes, and theoretically it could work.
We didn't test this approach though, so it will be nice if you can give it a try and post results here.
Where did you get the --files config parameter from?
How will Hadoop know to distribute local user files to HDFS?
This is a standard hadoop command parameter. It should be -files though (it had an extra dash before).
| gharchive/issue | 2017-11-19T09:00:02 | 2025-04-01T04:32:35.374336 | {
"authors": [
"cyxxy",
"eyalba",
"medb",
"steveloughran"
],
"repo": "GoogleCloudPlatform/bigdata-interop",
"url": "https://github.com/GoogleCloudPlatform/bigdata-interop/issues/75",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
151002112 | Document service account keys
Add documentation to the README for the use of service account keys for
authorization as an alternative to logging in with the gcloud cli.
Fixes #248
@ofrobots @justinbeckwith PTAL.
LGTM
LGTM.
| gharchive/pull-request | 2016-04-25T23:00:37 | 2025-04-01T04:32:35.378846 | {
"authors": [
"JustinBeckwith",
"matthewloring",
"ofrobots"
],
"repo": "GoogleCloudPlatform/cloud-trace-nodejs",
"url": "https://github.com/GoogleCloudPlatform/cloud-trace-nodejs/pull/249",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1173003131 | Migrate "build and publish" setup for windows-install-media to compute-image-tools
Problem: currently it's still in the internal codebase. It's covered by neither the old build-publish framework nor Concourse, so it can't be built and released by our release mechanism now.
Ideally we should release new install media with our release mechanism and keep adding new updates to the media.
Let's migrate to "compute-image-tools" so it can be covered by Concourse later.
Next: add entry in "guest-test-infra" accordingly.
All are copied from google3 without modification except for daisy_workflows/build-publish/windows_media/windows-install-media.publish.json
/hold
Please move all of the build files and workflow to compute-image-tools/daisy_workflows/image_build/windows/
Is there a reason we have upgrade.ps1 in separate folders? I'd suggest these files live in one folder and be renamed to upgrade-osversion.ps1. In the daisy workflow or prepare_install_media.ps1 they can be placed into the correct location as upgrade.ps1.
Either way works. I think the original thinking is that if there are any other version-specific files, or any file that is only valid for specific versions, it's easier to manage by folders. Right now, the code simply copies all the files from the subfolder, which is easy to implement and straightforward.
Updated accordingly except for the rename thing, which is tracked by backlog.
/unhold
| gharchive/pull-request | 2022-03-17T22:52:05 | 2025-04-01T04:32:35.382393 | {
"authors": [
"dntczdx"
],
"repo": "GoogleCloudPlatform/compute-image-tools",
"url": "https://github.com/GoogleCloudPlatform/compute-image-tools/pull/1888",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
273811126 | Add a python3 distroless image
This should help with GoogleCloudPlatform/distroless#111 and
bazelbuild/rules_docker#229.
A few of caveats:
This still needs to be uploaded to gcr.io, presumably by some automated
process that I'm not privy to.
This only tests importing a single module from the standard library
(as with the 2.7 image). I suspect there may be other dependencies
which are required for other modules in the standard library; it
would be nice to have tests for importing other modules which require
additional dependencies.
Since the distroless base image is currently based on Debian Jessie, this
uses version 3.4.2. Ideally, we'd update distroless to be based on
Stretch, and the newer versions of the dependent packages (across the
board). I suspect this would require some additional qualification
on Google's end.
re: the first point, you should be able to add it to the list here and it will be built and uploaded along with the other images
https://github.com/GoogleCloudPlatform/distroless/blob/master/BUILD#L10-L36
@hwright re: upload process, this is done by a cloud build specified in the yaml file you updated once the commit lands in master.
SGTM.
Also, sorry about the goofy history after rebasing. I could clean up the commits here on another branch, but I don't think it matters that much (github will squash them for merge, anyway).
Any next steps here?
Ping @dlorenc @r2d4
@duggelz FYI
To answer one of your questions, I've been sporadically working on creating Debian 8 packages for the Python 3.5 and 3.6 interpreters we use in App Engine Flex. Note that updating to Debian 9 still only gets us Python 3.5, not Python 3.6.
@duggelz Correct, though #135 is at least an attempt to get us that far.
| gharchive/pull-request | 2017-11-14T14:27:47 | 2025-04-01T04:32:35.388933 | {
"authors": [
"duggelz",
"hwright",
"mattmoor",
"r2d4"
],
"repo": "GoogleCloudPlatform/distroless",
"url": "https://github.com/GoogleCloudPlatform/distroless/pull/130",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
420578427 | Skip iot tests
They're broken.
Is there a tracking bug open to return to these at some point though? Cause otherwise they'll be lost in neverland.
The tracking bug is here: https://github.com/GoogleCloudPlatform/dotnet-docs-samples/issues/748
| gharchive/pull-request | 2019-03-13T15:52:54 | 2025-04-01T04:32:35.390464 | {
"authors": [
"SurferJeffAtGoogle",
"dzlier-gcp"
],
"repo": "GoogleCloudPlatform/dotnet-docs-samples",
"url": "https://github.com/GoogleCloudPlatform/dotnet-docs-samples/pull/759",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
334573669 | Improve the docstring for iter_foo().
The docstrings of the iter_foo() methods imply they are describing themselves as Foo Iterators, which they are not. Something like the line below would be better. Also update the documentation.
"""Get foo from GCP API."""
docs/latest/develop/dev/inventory.html
https://github.com/GoogleCloudPlatform/forseti-security/blob/stable/google/cloud/forseti/services/inventory/base/gcp.py#L595-L606
@create_lazy('compute', _create_compute)
def iter_images(self, projectid):
    """Image Iterator from gcp API call

    Args:
        projectid (str): id of the project to query

    Yields:
        dict: Generator of image resources
    """
    for image in self.compute.get_images(projectid):
        yield image
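For concreteness, a sketch of the suggested rewording (my phrasing; the final wording landed in #1702):
@create_lazy('compute', _create_compute)
def iter_images(self, projectid):
    """Iterate image resources fetched from the GCP Compute API.

    Args:
        projectid (str): id of the project to query

    Yields:
        dict: image resource data
    """
    for image in self.compute.get_images(projectid):
        yield image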
Updated docstrings for iter_foo and fetch_foo: #1702
| gharchive/issue | 2018-06-21T16:44:44 | 2025-04-01T04:32:35.403954 | {
"authors": [
"blueandgold",
"kssrini"
],
"repo": "GoogleCloudPlatform/forseti-security",
"url": "https://github.com/GoogleCloudPlatform/forseti-security/issues/1702",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
153475385 | Facet2
@briandealwis This doesn't actually work. When I call createDataModel it seems to throw a NullPointerException without much of a stack trace. I'm trying to find an example of configuring the jst.web facet
That NPE is a bug with the definitions in .localserver. I'll get some fixes up ASAP.
OK, I finally managed to get a config set up for the web facet that the framework accepts. So far all this does is refrain from creating an extra web.xml deployment descriptor but it's a good checkpoint so PTAL.
Next step will be to configure the various paths for the webapp folder and the like.
So far all this does is refrain from creating an extra web.xml deployment descriptor but it's a good checkpoint so PTAL
So that's to avoid the web.xml in the WebContent/WEB-INF in favour of the one in src/main/webapp?
LGTM with the .internal removed
| gharchive/pull-request | 2016-05-06T15:23:22 | 2025-04-01T04:32:35.407005 | {
"authors": [
"briandealwis",
"elharo"
],
"repo": "GoogleCloudPlatform/gcloud-eclipse-tools",
"url": "https://github.com/GoogleCloudPlatform/gcloud-eclipse-tools/pull/144",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
120441179 | pubsub: prevent maxInProgress from being sent to the API
Fixes #985
@leibale can you give this a shot?
$ npm install --save stephenplusplus/gcloud-node#spp--pubsub-985
you fixed the #985 bug, but broke something else (it might not be you, but it does not matter).. :(
At line 471 in pubsub/index.js you should use resp.name || subName (when the status is 409 and reuseExisting is true, I got a "The name of a subscription is required." error).
That's definitely a new bug, thanks for catching that. Opening a new issue now. (PR welcome as always :))
| gharchive/pull-request | 2015-12-04T16:50:35 | 2025-04-01T04:32:35.409167 | {
"authors": [
"leibale",
"stephenplusplus"
],
"repo": "GoogleCloudPlatform/gcloud-node",
"url": "https://github.com/GoogleCloudPlatform/gcloud-node/pull/992",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
113834376 | Use google-auth-library
https://github.com/google/google-auth-library-php
https://github.com/google/google-auth-library-php/pull/84 helps get us on track with this
The changes were merged into master, closing this out.
| gharchive/issue | 2015-10-28T14:33:17 | 2025-04-01T04:32:35.410901 | {
"authors": [
"dwsupplee",
"stephenplusplus"
],
"repo": "GoogleCloudPlatform/gcloud-php",
"url": "https://github.com/GoogleCloudPlatform/gcloud-php/issues/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
57569317 | Entity documentation error
Line 37 of gcloud.datastore.entity.Entity says
Use :func:`gcloud.datastore.get` to retrieve an existing entity."
There is no such function. I think you mean gcloud.datastore.api.get().
Also, on a side note but slightly related, I find the package name gcloud.datastore.entity.Entity strange. Why not just gcloud.datastore.Entity? It seems the current approach probably goes against PEP 423 and 25.
Both of these are available as aliases in 0.4.0 at least.
>>> import gcloud
>>> gcloud.datastore.api.get
<function get at 0x7f9113397488>
>>> gcloud.datastore.get
<function get at 0x7f9113397488>
>>> gcloud.datastore.entity.Entity
<class 'gcloud.datastore.entity.Entity'>
>>> gcloud.datastore.Entity
<class 'gcloud.datastore.entity.Entity'>
@rstuart85 You are both correct and "incorrect". I filed #632 and hopefully (if we move forward on it) something like Blob.create() will handle this correctly.
@rstuart85 You are both correct and "incorrect".
The classes and functions needed for datastore are loaded into the main namespace in __init__.py:
from gcloud import credentials
from gcloud.datastore import _implicit_environ
from gcloud.datastore.api import allocate_ids
from gcloud.datastore.api import delete
from gcloud.datastore.api import get
from gcloud.datastore.api import put
from gcloud.datastore.batch import Batch
from gcloud.datastore.connection import Connection
from gcloud.datastore.entity import Entity
from gcloud.datastore.key import Key
from gcloud.datastore.query import Query
from gcloud.datastore.transaction import Transaction
@pdknsk Thanks for the assist.
I find the package name gcloud.datastore.entity.Entity strange. Why not just gcloud.datastore.Entity? Seems that the currnet approach probably goes against PEP 423 and 25.
FWIW, on the terminology front:
gcloud and gcloud.datastore are packages
gcloud.datastore.entity is a module
gcloud.datastore.entity.Entity is a class. As @dh
:+1: Thanks @tseaver
| gharchive/issue | 2015-02-13T07:47:14 | 2025-04-01T04:32:35.416701 | {
"authors": [
"dhermes",
"pdknsk",
"rstuart85",
"tseaver"
],
"repo": "GoogleCloudPlatform/gcloud-python",
"url": "https://github.com/GoogleCloudPlatform/gcloud-python/issues/630",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1344680370 | json_key authentication not supported
I do the following in a GitHub Action. It looks like json_key authentication is not supported. Is there a way around this?
- uses: docker/login-action@v2
  with:
    registry: us-central1-docker.pkg.dev
    username: _json_key
    password: ${{ secrets.GOOGLE_SVC_ACCOUNT_JSON }}
- uses: 'docker://us-docker.pkg.dev/gcr-cleaner/gcr-cleaner/gcr-cleaner-cli'
  with:
    args: >-
      -repo=us-central1-docker.pkg.dev/<myrepo>
      -keep=3
      -tag-filter-all=dev.+$
gives me the error
"failed to setup auther: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information."
Related to this discussion: https://github.com/GoogleCloudPlatform/gcr-cleaner/pull/99#discussion_r940780976
@chris-volley in the mentioned pull request I've added a sample how to use this action with _json_key.
However this seems to be an issue, maybe @sethvargo can chime in.
| gharchive/issue | 2022-08-19T16:50:32 | 2025-04-01T04:32:35.419514 | {
"authors": [
"chris-volley",
"tfonfara"
],
"repo": "GoogleCloudPlatform/gcr-cleaner",
"url": "https://github.com/GoogleCloudPlatform/gcr-cleaner/issues/101",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2354242237 | Terraform deployment refactoring and improvements
This change includes the followings changes
"tf" folder has been renamed to "infra" now contains all infrastructure management-related files.
Some automation scripts have been removed, while others have been moved to the "infra" folder.
Decoupling frontend and backend deployment allowing for redeployment without Terraform.
Gdrive configuration has been separated from Terraform scripts.
Please confirm you have tested successfully end to end in a fresh project before merging the PR.
If you're going to rename the directory from tf to infra, change the root README.md which discussed the paths and the locations of files -- I know that will change more but let's try to stay on top of them. If you know of other places in the repo that refer to the tf directory (maybe do a code search) please change those as well.
I have tested this and the automated deployment is working fine.
There are no longer any references to the installation_scripts and tf folders (removed).
Looks good, other than comment https://github.com/GoogleCloudPlatform/genai-for-marketing/pull/67#discussion_r1653377104.
One last ask, please add a bit of documentation to the readme on how to redeploy.
| gharchive/pull-request | 2024-06-14T23:35:44 | 2025-04-01T04:32:35.425378 | {
"authors": [
"imp14a",
"michaelwsherman"
],
"repo": "GoogleCloudPlatform/genai-for-marketing",
"url": "https://github.com/GoogleCloudPlatform/genai-for-marketing/pull/67",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
386819091 | Increase default message size limit to 256MiB
By default gRPC rejects messages over 4 MiB. However, Bigtable routinely produces messages much larger than that. Other Bigtable client libraries have changed the default to 256 MiB; we should be consistent with them.
/cc: @sduskis
| gharchive/issue | 2018-12-03T13:31:56 | 2025-04-01T04:32:35.426456 | {
"authors": [
"coryan"
],
"repo": "GoogleCloudPlatform/google-cloud-cpp",
"url": "https://github.com/GoogleCloudPlatform/google-cloud-cpp/issues/1576",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
258478947 | Should fail
First commit of this pull request should break AppVeyor.
(Trying to test project generation.)
Hmm.. failed without any change required. Looks like everything is okay.
| gharchive/pull-request | 2017-09-18T13:27:00 | 2025-04-01T04:32:35.427385 | {
"authors": [
"jskeet"
],
"repo": "GoogleCloudPlatform/google-cloud-dotnet",
"url": "https://github.com/GoogleCloudPlatform/google-cloud-dotnet/pull/1462",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
319341813 | Requester Pays on NIO
There is an option on BlobSourceOption and BlobWriteOption to specify the billing project to use when accessing a requester pays bucket, but there doesn't seem to be a way to wire that through NIO.
Am I missing it? And if not, is this something that could be considered?
A more specific suggestion would be to add the relevant userProject options (BlobSourceOption, BlobGetOption, BlobWriteOption) to all calls to the Storage object so that the billing project is always explicitly set. Would there be any objection to that? I'm happy to make a PR.
I'm thinking here, and where necessary in CloudStorageFileSystemProvider.
This is problematic for us because it prevents using of any requester pays enabled buckets through NIO.
@jean-philippe-martin any thoughts ? :)
That's a good idea, I would welcome a PR yes. Make sure to mention me in it so I can find it easily!
@Horneth Do you have a PR for this feature ready to go, or could you easily make one?
Code is now in, the feature should be available. Let me know how it goes, and mention me in a new bug if you run into any difficulty!
@hzyi-google, would you mind please closing this issue? I am not able to do it myself.
Sure. Thanks for contributing!
Thank you, @hzyi-google !
@jean-philippe-martin There are integration tests failing. Can you double check?
Yes I can look into it @hzyi-google . Do you have something a bit more specific? Do you mean the ITStorageTest test suite?
From my side, testCantCreateWithoutUserProject, testCantReadWithoutUserProject, testCantCopyWithoutUserProject and testFileExistsRequesterPaysNoUserProject are failing (the last one is an error).
@jean-philippe-martin Yes, you are right. Our testing project does have permission to that method in the resourcemanager api, and the test with errors fails for the same reason. I'll create a separate github issue for this and ignore the tests for now. Thanks for the clarification :)
@hzyi-google great, I'm glad the mystery is solved!
Hopefully there is some way you can use a project without these permissions, so you can restore the tests on your side. Or split the integration tests into two parts, one with that permission and one without.
| gharchive/issue | 2018-05-01T21:54:41 | 2025-04-01T04:32:35.445945 | {
"authors": [
"Horneth",
"droazen",
"hzyi-google",
"jean-philippe-martin"
],
"repo": "GoogleCloudPlatform/google-cloud-java",
"url": "https://github.com/GoogleCloudPlatform/google-cloud-java/issues/3221",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
245886954 | Use HTTPS links and redirects for docs
Replaces https://github.com/GoogleCloudPlatform/google-cloud-java/pull/2200
cc @pongad because his PR https://github.com/GoogleCloudPlatform/google-cloud-java/pull/2267 will conflict with this
Coverage change unknown when pulling 7d159fddcbc35d80aa6e78378d12a0992c8708fa on tswast-patch-1 into master.
Could you sync & resolve conflicts?
Are those Travis failures something I should worry about? I'm having trouble parsing the build logs.
Ignore the oraclejdk7 failure (I'm going to remove that shortly). Also ignore the appveyor/branch failure - that ran integration tests, and the failures are due to tests that have since been turned off in master. You're good to merge.
| gharchive/pull-request | 2017-07-26T23:47:46 | 2025-04-01T04:32:35.449652 | {
"authors": [
"coveralls",
"garrettjonesgoogle",
"tswast"
],
"repo": "GoogleCloudPlatform/google-cloud-java",
"url": "https://github.com/GoogleCloudPlatform/google-cloud-java/pull/2280",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
184036402 | Support Out-File in the GCS provider
Out-File seems pretty useful. I was saddened to see it wasn't supported by the GCS provider.
PS gs:\gcs-folder-1564363309\> "Test file contents" | Out-File test.txt
Out-File : Cannot open file because the current provider (Google.PowerShell\GoogleCloudStorage) cannot open a file.
At line:1 char:24
+ "Test file contents" | Out-File test.txt
+ ~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (:) [Out-File], PSInvalidOperationException
+ FullyQualifiedErrorId : ReadWriteFileNotFileSystemProvider,Microsoft.PowerShell.Commands.OutFileCommand
I assume there isn't much additional work we need to do, since Set-Content is already supported in the provider?
Bummer, but I guess it makes sense since the -File noun should only apply to the FileSystem provider. I guess the workaround here is to use Set-Content instead.
As for the linked bug, #374, you'd have to rewrite the expression to not use pipe redirection operators, and instead just use string concatenation and Get-Content / Set-Content.
@SurferJeffAtGoogle FYI
Copy-Item also fails when I try to copy between the file system and GCS. Is the cause the same?
I suspect so.
PS C:\Users\Jeffrey Rennie> copy-item env:GOOGLE_APPLICATION_CREDENTIALS creds.env
copy-item : Source and destination path did not resolve to the same provider.
At line:1 char:1
+ copy-item env:GOOGLE_APPLICATION_CREDENTIALS creds.env
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (System.Collecti...[System.String]:Collection`1) [Copy-Item], PSArgumen
tException
+ FullyQualifiedErrorId : CopyItemSourceAndDestinationNotSameProvider,Microsoft.PowerShell.Commands.CopyItemComman
d
That's a shame.
| gharchive/issue | 2016-10-19T18:17:25 | 2025-04-01T04:32:35.453073 | {
"authors": [
"SurferJeffAtGoogle",
"chrsmith"
],
"repo": "GoogleCloudPlatform/google-cloud-powershell",
"url": "https://github.com/GoogleCloudPlatform/google-cloud-powershell/issues/352",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
199681078 | slow download
I am trying to download a 400 MB GCS file using https://github.com/GoogleCloudPlatform/google-cloud-python/blob/ce6756fbe3633c74fd742567654565147628f4ba/storage/google/cloud/storage/blob.py. I noticed that by default my download is chunked due to this setting:
https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/core/google/cloud/streaming/transfer.py#L46
As a result, downloading the file results in 400 calls to GCS, which significantly slows down the download.
Is there some clean way I can override that when using blob.download_to_file?
Thanks for reporting. This is actually deeper than it seems. The "correct" fix is for us to remove the (non-public) google.streaming stuff that this relies on and get a better chunking story (one that doesn't rely on httplib2).
For a fix that works right now, you can duplicate the source but pass chunksize to Download. Also, gsutil (the CLI tool) has a very optimized strategy for fast downloads.
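For illustration, a minimal sketch of that workaround against recent releases (bucket and object names are hypothetical; an explicit chunk size must be a multiple of 256 KiB):
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-bucket")  # hypothetical names

# A larger chunk size means fewer requests; 100 MiB here is illustrative.
# Leaving chunk_size unset/None downloads in a single request.
blob = bucket.blob("big-file.bin", chunk_size=100 * 1024 * 1024)
with open("/tmp/big-file.bin", "wb") as fh:
    blob.download_to_file(fh)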
@lukesneeringer says this is blocked on httplib2 work.
The correct solution is blocked on #1998.
What would be the benefits and drawbacks of increasing the default, though? Would it be okay to make the default chunk size 10 MB instead of 1 MB?
I think GCS has a way of auto-detecting what's the best chunk size, so I'd ask them. I know they have solutions for this issue.
I think this is basically a duplicate of #2222
@dhermes Is this easier now that #1998 is done?
Not easier or harder. AFAIK there is no perfect magic chunking answer; @thobrla has said before that downloading in a single request (vs. chunks) is almost always the right answer.
| gharchive/issue | 2017-01-09T22:32:15 | 2025-04-01T04:32:35.464257 | {
"authors": [
"bjwatson",
"dhermes",
"evanj",
"lukesneeringer",
"pdudnik"
],
"repo": "GoogleCloudPlatform/google-cloud-python",
"url": "https://github.com/GoogleCloudPlatform/google-cloud-python/issues/2927",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
271154525 | PubSub: Synchronous pull
After the API redesign it no longer seems possible to perform a synchronous pull on a Pub/Sub subscription. Is the recommended way of accomplishing this in google-cloud-pubsub v0.29.0 to use SubscriberClient via the pull method directly, or to write my own Policy, or some other way?
@mwilliammyers The API is now async/callback based: the intended usage is for the app developer to write a callback function which handles each message.
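A sketch of that callback pattern (project and subscription names are placeholders, and the exact surface shifted between v0.29.0 and later releases, where subscribe() returns a future you can block on):
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
path = subscriber.subscription_path("my-project", "my-sub")  # placeholders

def callback(message):
    print(message.data)  # process the message here
    message.ack()

future = subscriber.subscribe(path, callback=callback)
try:
    future.result()  # blocks; raises if the stream breaks
except KeyboardInterrupt:
    future.cancel()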
Duplicate of #4338.
| gharchive/issue | 2017-11-04T01:01:16 | 2025-04-01T04:32:35.467184 | {
"authors": [
"mwilliammyers",
"tseaver"
],
"repo": "GoogleCloudPlatform/google-cloud-python",
"url": "https://github.com/GoogleCloudPlatform/google-cloud-python/issues/4344",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
311949836 | Spanner: TypeError: '' has type str, but expected one of: bytes
OS type and version: Ubuntu Xenial
Python version and virtual environment information (python --version): Python 3.6
google-cloud-python version: https://github.com/GoogleCloudPlatform/google-cloud-python/releases/tag/spanner-1.3.0
Stacktrace if available:
File "/tmp/par__22_www_server.par/__main__/third_party/google_cloud/current/xenial/google/cloud/spanner_v1/streamed.py", line 140, in __iter__
self._consume_next() # raises StopIteration
File "/tmp/par__22_www_server.par/__main__/third_party/google_cloud/current/xenial/google/cloud/spanner_v1/streamed.py", line 114, in _consume_next
response = six.next(self._response_iterator)
File "/tmp/par__22_www_server.par/__main__/third_party/google_cloud/current/xenial/google/cloud/spanner_v1/snapshot.py", line 51, in _restart_on_unavailable
iterator = restart(resume_token=resume_token)
File "/tmp/par__22_www_server.par/__main__/third_party/google_cloud/current/xenial/google/cloud/spanner_v1/gapic/spanner_client.py", line 662, in execute_streaming_sql
partition_token=partition_token,
TypeError: '' has type str, but expected one of: bytes
Are you iterating over your own data?
Could you convert '' to b''?
Which param of that method should I convert from '' to b''?
The only thing changed recently is partition_token, whose type is bytes, but I don't use that one. https://github.com/GoogleCloudPlatform/google-cloud-python/blame/spanner-1.3.0/spanner/google/cloud/spanner_v1/gapic/spanner_client.py#L662
@jsimonweb FYI
@yixizhang We need enough information to be able to reproduce the issue. Can you post a snippet / gist of your code, with any sensitive data redacted / replaced?
@yixizhang can you confirm if you are still experiencing this issue? Thanks!
I can't reproduce it and I haven't experienced the same error since. But reading the code the other day, I think this line could be buggy.
https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/spanner/google/cloud/spanner_v1/snapshot.py#L39
The default value should not be a str, but rather b''.
Hope that helps.
@yixizhang Thanks very much for the follow-up: I can confirm that the initial resume_token should be bytes, rather than text.
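To make the failure mode concrete: resume_token is a protobuf bytes field, so the initial sentinel must be bytes (the module path below is from that era and may vary by release):
from google.cloud.spanner_v1.proto import spanner_pb2

spanner_pb2.ExecuteSqlRequest(resume_token=b'')  # fine: bytes field
# spanner_pb2.ExecuteSqlRequest(resume_token='')  # raises on Python 3:
# TypeError: '' has type str, but expected one of: bytes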
| gharchive/issue | 2018-04-06T12:02:20 | 2025-04-01T04:32:35.473278 | {
"authors": [
"chemelnucfin",
"danoscarmike",
"jabubake",
"tseaver",
"yixizhang"
],
"repo": "GoogleCloudPlatform/google-cloud-python",
"url": "https://github.com/GoogleCloudPlatform/google-cloud-python/issues/5164",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
179334087 | Move translate code into a subpackage
This PR was created via: https://gist.github.com/dhermes/e239691aa584bd56a5352e34aad27cf3
export PROJECT_DIR="${HOME}/google-cloud-python"
export READMES_DIR="${HOME}/i-wrote-some-readmes-for-2357"
cd ${PROJECT_DIR}
git worktree add -b make-translate-subpackage ../hotfix official/master
python make_commits.py \
--git-root "${PROJECT_DIR}/../hotfix" \
--package translate \
--package-name "Google Translate" \
--readme "${READMES_DIR}/translate/README.rst"
Rebased after #2433. Green build
| gharchive/pull-request | 2016-09-26T20:52:02 | 2025-04-01T04:32:35.475329 | {
"authors": [
"dhermes"
],
"repo": "GoogleCloudPlatform/google-cloud-python",
"url": "https://github.com/GoogleCloudPlatform/google-cloud-python/pull/2432",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
253001080 | Support templateSuffix option in Tabledata.insertAll
The templateSuffix option in Tabledata.insertAll is currently marked [Experimental], but should the template tables feature be supported for Beta/GA?
See #1635 for another issue related to table creation while streaming data.
/cc @tswast
I'm checking with the backend folks. I need to figure out how template tables interact with writing to partitions.
Template tables are a low priority feature. Not required for GA by the client libraries. They are mostly obsolete now that tables can be partitioned.
Closing, based on the comment above that this feature is now mostly obsolete.
| gharchive/issue | 2017-08-25T19:47:38 | 2025-04-01T04:32:35.477793 | {
"authors": [
"quartzmo",
"tswast"
],
"repo": "GoogleCloudPlatform/google-cloud-ruby",
"url": "https://github.com/GoogleCloudPlatform/google-cloud-ruby/issues/1692",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
359219557 | Use Rake::TestTask for running tests
Use Rake::TestTask to ensure tests run when the task is invoked.
This fixes a problem where a task can invoke multiple sub-tasks,
but the test files won't run until the parent task completes.
@dazuma, I expect this change will also require some changes to the synth.py files. Unfortunately, this is needed to get the build back to passing. (This, or reverting #2420.)
Ouch. Okay, I think I'll just remove Rakefile updating outright from all the synth scripts.
Things will get better when we have more control over the generator, right? :)
okay, synth scripts updated.
| gharchive/pull-request | 2018-09-11T20:53:17 | 2025-04-01T04:32:35.480113 | {
"authors": [
"blowmage",
"dazuma"
],
"repo": "GoogleCloudPlatform/google-cloud-ruby",
"url": "https://github.com/GoogleCloudPlatform/google-cloud-ruby/pull/2426",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
238277631 | Add new repo
Add a "Add Repo" button to current clone window
Add the CsrAddRepoWindow,
Add unit tests for regex repo name check only. More tests will be added later
Codecov Report
Merging #735 into m_csr will increase coverage by 0.3%.
The diff coverage is 43.2%.
@@ Coverage Diff @@
## m_csr #735 +/- ##
=======================================
+ Coverage 9.56% 9.87% +0.3%
=======================================
Files 485 489 +4
Lines 11845 11914 +69
=======================================
+ Hits 1133 1176 +43
- Misses 10712 10738 +26
Impacted Files | Coverage Δ
...udSourceRepositories/CsrCloneWindowContent.xaml.cs | 0% <ø> (ø)
...CloudSourceRepositories/CsrCloneWindowContent.xaml | 0% <ø> (ø)
...oogleCloudExtension/UserPrompt/UserPromptWindow.cs | 52.94% <ø> (ø)
...SourceRepositories/CsrAddRepoWindowContent.xaml.cs | 0% <0%> (ø)
...CloudSourceRepositories/CsrCloneWindowViewModel.cs | 0% <0%> (ø)
...nsion/CloudSourceRepositories/CsrReposViewModel.cs | 0% <0%> (ø)
...oudSourceRepositories/CsrAddRepoWindowContent.xaml | 0% <0%> (ø)
...ension/Utils/Validation/ValidatingViewModelBase.cs | 100% <100%> (+2.85%)
...tension/CloudSourceRepositories/CsAddRepoWindow.cs | 11.11% <11.11%> (ø)
...oudSourceRepositories/CsrAddRepoWindowViewModel.cs | 64.28% <64.28%> (ø)
... and 8 more
Continue to review full report at Codecov.
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 3b21766...3f3c252.
Updated, please take a look again.
Updated. Please take a look.
Thanks.
| gharchive/pull-request | 2017-06-23T23:50:28 | 2025-04-01T04:32:35.497090 | {
"authors": [
"Deren-Liao",
"codecov-io"
],
"repo": "GoogleCloudPlatform/google-cloud-visualstudio",
"url": "https://github.com/GoogleCloudPlatform/google-cloud-visualstudio/pull/735",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
549115377 | storage: separate each sample into a separate file
Each sample should be in a standalone file. This workitem is to track that work.
@JesseLovelace I believe you already are doing this as part of a broader effort.
This could be expanded to more generally adjust samples to follow the Sample Format Style Guide, which includes guidance on separate files.
| gharchive/issue | 2020-01-13T18:49:04 | 2025-04-01T04:32:35.501148 | {
"authors": [
"crwilcox",
"kurtisvg"
],
"repo": "GoogleCloudPlatform/java-docs-samples",
"url": "https://github.com/GoogleCloudPlatform/java-docs-samples/issues/1945",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
90425403 | Label selector query parameter is labelSelector instead of label-selector?
Both api.md and labels.md say the query parameter is label-selector:
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/api.md
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/labels.md
But it seems like it should be labelSelector
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/api/swagger-spec/v1beta3.json
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/api/swagger-spec/v1.json
That looks like a bug in the documentation. PR sent to fix.
| gharchive/issue | 2015-06-23T15:52:57 | 2025-04-01T04:32:35.509514 | {
"authors": [
"jlowdermilk",
"saturnism"
],
"repo": "GoogleCloudPlatform/kubernetes",
"url": "https://github.com/GoogleCloudPlatform/kubernetes/issues/10229",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
40769920 | Require Service.ContainerPort
The fallback for now is to use Container[0].Port[0] which is, IMO, confusing.
+1 to the idea of working with ports only as a set.
One thing I'll add: making it explicit and requiring ContainerPort to be set makes it much easier to write a UI that presents how services align with ports in pods.
ContainerPort isn't the greatest name. I like TargetPodPort the best but it is a bit verbose.
Assigning to @thockin since he says his refactor will cover the rename.
xref https://github.com/mesosphere/kubernetes-mesos/issues/59
I'm going to close this issue as:
Port has been renamed to ContainerPort
We now have improved default behavior, and attempts to implement a (more) complete solution were rejected by @thockin
Re-open if you have any concerns.
| gharchive/issue | 2014-08-21T03:41:23 | 2025-04-01T04:32:35.512801 | {
"authors": [
"brendandburns",
"jdef",
"pmorie",
"thockin"
],
"repo": "GoogleCloudPlatform/kubernetes",
"url": "https://github.com/GoogleCloudPlatform/kubernetes/issues/983",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
74004144 | WIP: Security Policy
I'd like to get some feedback on locking down how the service account defines and enforces policy that results in a security context on the pod. This contains some types that @pmorie and I came up with, defining a SecurityPolicy object that can be configured with constraints as well as strategies to help enforce and create a security context.
This PR is based on @liggitt's service account PR; only the last commit contains new code.
@smarterclayton @erictune @pmorie @liggitt - PTAL and see if this is the approach to move forward. If this is ok then I'll update the design docs with use cases and types.
edit: I've distilled this down to just the design to make it easier.
@bgrant0607 @erictune
reorganized the commits to be in a more sane format for reviewing
@erictune @smarterclayton @pmorie @deads2k @liggitt - I've distilled this down to just the design to gather feedback on the proposal and types. Please take a look.
@pweil- This needs rebase now that service accounts are in.
Updated
@deads2k @pmorie @liggitt this has been updated to reflect our discussions yesterday. Please take a look
@liggitt - I think I've addressed your feedback. PTAL, particularly around admission. Since we agree that the strategies belong on the SCC I believe that unioning of permissions is not a requirement. The allocator must find an exact match in order to know what strategies to use (which would in turn look for pre-allocated values).
Real world example is SCC A that allows priv == true and RunAsAny user strat and SCC B that has priv == false and MustRunAsRange user strat. We need to find exactly which SCC supports the pod.Privileged == true setting in order to validate the existing fields and generate the unset fields so I don't think a union buys us anything.
That said, it will be an administration task to ensure that you cannot have multiple SCCs that support a request. The allocator would be dumb and just pick the first match. I believe that this can be avoided by setting the values of pod.container.securitycontext to ensure it runs with the right SCC and is only a problem when it comes to setting some, but not all, of the security context fields and letting the allocator pick how to generate the rest.
@pmorie: de-nitted, ptal
@pweil- Are you going to add treatment for pods that come from a non-apiserver config source (mirror pods) ?
For the record your delousing operation was a success.
@pweil- Are you going to add treatment for pods that come from a non-apiserver config source (mirror pods) ?
That is something I'm still a bit unclear on. In some of the other conversations, the thought was that if a manifest-based pod is created, then the creator has administrative privileges and we should trust the security context on the pod definition. I think the disconnect, for me, is the behavior of the system if the mirror pod subsequently fails admission. Open to suggestions on this topic.
The kubelet uses an authenticated client to submit mirror pods to the API. As long as that client's credentials identify the kubelet as a user that can submit privileged pods, admission should succeed.
The main thing I've run into that you need to be careful about is modifying the spec of mirror pods in an admission controller. I think that would require the security context of a mirror pod to be fully specified.
If the kubelet is running without an API connection, no mirror pods are created.
@pmorie update based on your most recent feedback.
Added a new commit for types and storage implementation
@liggitt @pweil- we discussed adding a default security context for mirror pods -- doesn't that allow you to specify?
Having an SCC on the kubelet would be fine and allow the kubelet to generate an SC if not specified. However, that SCC must not use any strategy that tries to look up pre-allocated UIDs, etc since it may not be running with an api connection. It's basically an admin SCC with RunAsAny.
It's basically an admin SCC with RunAsAny
Which makes sense, given where the static pods came from
Also, to speak with precision, "static pods" are the pods that come from a source manifest file on the kubelet. "mirror pods" are exact replicas of those static pods which the kubelet submits to the API (if the kubelet has an API connection), presumably to help inform the scheduler about things that are consuming resources on the node.
added more implementation details, interfaces for the allocator api, strategies, and updates to the design to support the strategy options.
UC Example 1: the MustRunAs user strategy can support both pre-allocated and static uids by allowing the options to configure an annotation or a static id. This supports the use case of running all SAs with a unique id which can be pre-allocated and assigned.
UC Example 2: running as an arbitrary id is supported by the RunAsAny strategy which returns whatever is configured in the pod's SC or nil so the image (or default) user will be used.
bump for some eyeballs on the type updates and code (particularly the must run as user strategy that would support pre-allocated UIDS)
What has two eyeballs and a burning need for security context constraints? THIS GUY
rebased, PTAL @pmorie
I asked a bunch of questions and suggested some alternatives. The motivation behind my questions was a couple of things:
I want it to be easier for customized kubernetes implementations to plug in their own policy language (e.g. hypothetical AWS IAM extensions for Kubernetes). this means:
try to keep generic authorization for doing things to opaque objects fairly separate from rules for specific fields in objects (I think having a SecurityContextConstraints object does this)
try to keep the service account concept (which various existing auth frameworks might already have) separate from pod specifics (hence my question about why those are entangled)
a realization that we will need something quite similar to this for services (#8723), and a desire to avoid excessive duplication of workflow for services.
ongoing conversations with my company's authz people about lessons learned from our current system, and desire to apply those. Much more conversation expected along these lines when I review the PR for Policy.
Updated the design based on the feedback that is not still under discussion.
I'd like us to treat the following things in a consistent way:
core authorization: Role, PolicyRule, etc (https://github.com/openshift/origin/blob/master/pkg/authorization/api/v1/types.go)
authorizing creation and use of SecurityContexts (this PR: #7893)
authorizing use of specific fields in Services #8723
authorizing use of hostDir #7925
authorizing use of a secret from a pod #4957
authorizing use of a secret from an ImagePullSecret or ServiceAccount
authorizing use of a TemplateRef in a ReplicationControllerSpec
other things that may come up
I don't think we have that yet, and I don't think we can get there before the pre-v1 code freeze. So, I think this PR needs to be on hold until after v1.
Ok. We'll carry this in OpenShift and circle back.
https://github.com/openshift/origin/pull/2856
ok to test
@pweil- please tag me when this is rebased.
@deads2k - rebased to 96828f203c8d960bb7a5ad649d1f38f77ae8910f as requested
Got it. Thanks.
ok to test
@pweil- The useHostDir flag needs to be renamed to useHostPath...
needs rebase.
rebased.
I'd like us to treat the following things in a consistent way: (list quoted from above)
@pmorie I'd like to come to an agreement on the base structures of the objects that constrain policy. I think the key missing item is how it would integrate with the Role/Policy Rule/etc objects. The authorization components are pretty simple after that (as an admission rule it is easy to allow/disallow things based on the settings) and the object can be made to have different sections about authorized N types of api objects, not just pods/containers.
@pmorie give some love to @pweil- here? Where do we stand?
I'm on vacation this week (:house: :mouse:) and will take a look next week.
Somewhere in this or a related thread, I said we should not merge this issue until we figured out a grand unified theory of policy that would encompass both the SecurityContextConstraint feature and the Policy/RoleBinding features into one object. I haven't come up with a way to do that. So, I think it is time to unblock this PR.
Also, @mikedanese convinced me that the two-phase approach is easier for users to reason about and has a good track record at our company.
By two-phased approach, I mean:
authorizer module checks if user is authorized to create/update objects of type X in this namespace at all.
an admission controller checks the namespace to find exactly one SecurityContextConstraint for that namespace, or the default one for the cluster. It rejects changes to objects that have specific disallowed fields.
I realize this was the original proposal and that I previously pooh-pooh-ed it. I am un-pooh-pooh-ing it. :hankey:
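To make the two-phase flow concrete, a minimal sketch (the real admission controllers are written in Go; every name below is hypothetical):
def admit(pod, user, namespace, authorizer, scc_for_namespace):
    # Phase 1: generic authorization: may this user create pods here at all?
    if not authorizer(user, "create", "pods", namespace):
        return (False, "user may not create pods in this namespace")

    # Phase 2: field-level check against the namespace's single SCC
    # (or the cluster default), rejecting disallowed field values.
    scc = scc_for_namespace(namespace)
    if pod.get("privileged") and not scc.get("allowPrivileged"):
        return (False, "privileged containers are not allowed here")
    return (True, "")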
One thing @mikedanese and I talked about is whether the SecurityContextConstraints object is only meant to control allowed values in pod.spec.securityContext. If so, then what will the objects be called that control what values are allowed for pod.spec.volumes[].hostDir, or pod.spec.ports[].hostPort?
Are multiple objects better, or a single object? e.g.
scc SecurityContextConstraints
allowHostPort bool
allowHostDir bool
...
@mikedanese you said you had time to review this. Assigning to you.
I would really prefer if this was PodSecurityPolicy and was comprehensive (as much as possible).
I would really prefer if this was PodSecurityPolicy and was comprehensive
Yes, I'd like to get this renamed in this PR. I think it more accurately reflects what we're trying to achieve and extends well to other objects that we want to control in the same pattern that don't have a SecurityContext (like services). I have completed the work in downstream but need to roll it up here. I'll include the changes for the other comments as well.
We don't want to delete non-conforming objects after an SCC is changed. And we need to be able to change them, particularly to allow adding SCCs to pre-existing clusters. So, we should tell users whether objects created prior to creation/modification of the SCC conform to it.
Therefore, suggest having both SecurityContextConstraints.Spec and a SecurityContextConstraints.Status.
The Spec would have all the things that are in the object now.
The Status would have at least a Condition. The Condition would be Status True when all the objects that the SCC managed would pass admission control under its current Spec, otherwise False or Unknown.
Re-evaluating compliance requires having all the inputs. If the context user/groups is one of the inputs, that would need to be recorded for later re-evaluation.
@mikedanese - rebased, renamed to PodSecurityPolicy, removed a lot of the extra naming on the api objects.
Couple comments on the API changes:
Please break the top-level object into a Spec and Status per api conventions. It's okay if an object only has a spec (see limit range)
we should put this in the experimental API to soak
@mikedanese @smarterclayton @liggitt - thanks for all the feedback so far. Here's a summary based on our convo up to this point for things that are addressed in the round1 feedback commit or addressed in comment history:
host path specific validation - punting on this until later iterations https://github.com/kubernetes/kubernetes/pull/7893#discussion_r39240825
break up into spec and status - https://github.com/kubernetes/kubernetes/pull/7893#issuecomment-140167181
break up and reorg constant blocks - https://github.com/kubernetes/kubernetes/pull/7893#discussion_r39451945
rename strategy types - https://github.com/kubernetes/kubernetes/pull/7893#discussion_r39452008
update godoc - https://github.com/kubernetes/kubernetes/pull/7893#discussion_r39539575
move to experimental api - https://github.com/kubernetes/kubernetes/pull/7893#issuecomment-140167181
support port ranges - https://github.com/kubernetes/kubernetes/pull/7893#discussion_r39418844
Incomplete
validation for host port ranges in the provider - pending approval of new api objects
if it would help this along I can remove the provider and strategy implementations so this is just api and submit the provider logic separately.
if it would help this along I can remove the provider and strategy implementations so this is just api and submit the provider logic separately.
+1 that makes sense to me. Can we put this in the experimental API? I'm going to continue reviewing this after v1.1 is cut (monday). I'd really like to get this in as experimental for v1.2.
+1 that makes sense to me. Can we put this in the experimental API? I'm going to continue reviewing this after v1.1 is cut (monday). I'd really like to get this in as experimental for v1.2.
The current iteration has this in experimental. +1 to targeting v1.2. Thanks for the update
@mikedanese - updated to be api only
What is the plan for restricting use of volume plugins? Does that belong in PodSecurityPolicy?
Volumes are in Pods.
For example, if you ask a kubelet to mount a fiber channel volume, it'll do that, as root. Not necessarily something you want all users to be able to do, but maybe certain users and/or namespaces can access certain volumes.
Fine if the answer is "probably belongs in PodSecurityPolicy but not in this PR".
@pmorie @rootfs
For object level policy in Openshift, the a set of allowed actions -- a Role -- is a separate type from the list of people who can act in that role -- a RoleBinding. This allows reuse of Roles, I gather.
In PodSecurityPolicy, the two parts are in one object. Everyone okay with that discrepancy?
That was the desire.
On Sep 30, 2015, at 10:24 AM, Eric Tune notifications@github.com wrote:
What is the plan for restricting use of volume plugins? Does that belong in
PodSecurityPolicy?
Volumes are in Pods.
For example, if you ask a kubelet to mount a fiber channel volume, it'll do
that, as root. Not necessarily something you want all users to be able to
do, but maybe certain users and/or namespaces can access certain volumes.
Fine if the answer is "probably belongs in PodSecurityPolicy but not in
this PR".
@pmorie https://github.com/pmorie @rootfs https://github.com/rootfs
—
Reply to this email directly or view it on GitHub
https://github.com/kubernetes/kubernetes/pull/7893#issuecomment-144325244.
We went back and forth on this - part of the role binding argument was that end users (project admins) would be the ones who can bind, but cluster admins are the ones who define roles. Down the road this needs to be more of a capability model - I have the capability to run a pod with privileged and the capability to give rights to others, therefore I can add another user to the PSP. It also matches some of the discussions we had a long time ago about pod templates (I own the template, I identify who can edit it).
So in the short term it seemed parsimonious to model this as a unified item because it is very much a cluster admin role and it is the most succinct way to encapsulate the problem today.
What is the plan for restricting use of volume plugins? Does that belong in PodSecurityPolicy?
Yes, it belongs here. I think having a slice of allowed types could probably be sufficient for this. I'll add it in.
PR is updated. Rather than a slice I went with a struct. I didn't see the benefit of a slice since volume sources are not typed with a string. This also gives room to expand volume security policies if we need it without adding an additional object. PTAL
@erictune
For example, if you ask a kubelet to mount a fiber channel volume, it'll do that, as root. Not necessarily something you want all users to be able to do, but maybe certain users and/or namespaces can access certain volumes.
Fine if the answer is "probably belongs in PodSecurityPolicy but not in this PR".
Bingo.
Bingo
https://github.com/pweil-/kubernetes/blob/security-policy/pkg/apis/experimental/types.go#L561
I missed that that was added. My bad!
reminder: host volume plugin locking down based on directories
https://github.com/kubernetes/kubernetes/pull/13524#issuecomment-153414962
We discussed that in https://github.com/kubernetes/kubernetes/pull/7893#discussion_r39240825 and https://github.com/kubernetes/kubernetes/pull/7893#discussion_r39418754
reminder: host volume plugin locking down based on directories
We should address this before PodSecurityPolicy leaves alpha
We should address this before PodSecurityPolicy leaves alpha
Agree. I should have time to make it back to this PR when our current release is wrapped up.
/cc @sttts
@pweil- what's the timing on this?
I mostly like this PR, but I would like it if PodSecurityPolicy (PSP) did not contain the list of users and groups that can use the PSP. I would rather if the Authorization Policy (openshift or otherwise) made the decision of which users/groups can use which PSPs. I wrote up how I think that would work in a new issue: #17637.
@pweil- what's the timing on this?
Back from PTO today. This is tops on my list after some administrative items. Will review #17637 and rebase.
Is there some way we can combine this with #18262 ?
Is there some way we can combine this with #18262 ?
I started reading through your PR yesterday. I think there are enough similarities to at least identify where things may overlap and consider combining if possible. I think the main difference that I have seen so far is in the way the policy is determined to be applicable to the submitted resource.
In this proposal you are granted access to specific security policies and may end up using zero, one, or more of them. This may still be applicable using selectors as #18262 proposes, assuming the selector takes user information into account, but I need to think about it a bit more.
@mikedanese - please take a look at the api commit again. I've updated to use a slice in the volume section and added a binding api object to decouple the users and groups from the policy object. Let me know what you think. If that looks good I'll round out the storage for bindings so this can get moving.
@davidopp @pweil-
PodSecurityPolicy is about saying no to pods that set fields to values that are not allowed. I couldn't tell from the PodPolicy proposal if or when a user would be prohibited by PodPolicy from using a particular label or QoS. So, they seem different.
Although we have realized and implemented that PSP should also be capable of defaulting prior to rejection (specifically around runAsUser, which can be delegated to docker but which is considered unsafe).
Yes, I think the best bet here is to get the api types in and we can decide on the admission/strategy implementation how we want defaulting to work. In our OpenShift implementation the strategies provide defaulting values, we may decide to use that or go with PodPolicy to provide it and leave PSP as a validation mechanism only.
@mikedanese checking in on this to see if you've had a chance to glance at the binding object
@pweil-, I spent some time discussing how these could be bound to users/groups with @erictune, @lavalamp, and @deads2k today. We may have a way to avoid a second binding object, and instead make use of the subjectaccessreview/resourceaccessreview mechanism @deads2k is working on upstreaming. Will touch base tomorrow.
Agree with what Jordan just said.
I have an idea on how we can break this into smaller steps.
Step 1a: (pweil-) commit PSP type definition, as a non-namespaced object (like ClusterPolicy)
Step 1b: (pweil-) commit an admission controller that evaluates all Pods against the cluster-level PSPs. (No differences across namespaces)
Step 1c: (erictune) advise users about the need to create one or more PSPs before enabling this new admission controller.
At this point, Kubernetes users now have the ability to set allowed PSPs on a cluster basis. This is an improvement over before.
Step 2a: (Eric, David, Jordan, pweil-) nail down the mechanism that we talked about today, and which Jordan just alluded to. This means adding can-use to all supported Authorizer modules.
Step 2b: (tbd) add a second PSP admission controller to check whether a namespace, or users/groups in that namespace, can use a PSP, using 2a.
Step 2c: (tbd) advise users about the need to add appropriate can-use policies to all roles/namespaces before upgrading from the old admission controller (1b) to this new admission controller (2b).
At this point, Kubernetes users can now control PSP usage on a per user, per group, per role and/or per namespace basis.
A couple of easy-to-address comments, then this looks good to go in.
Cleaned up validation and type definitions
Reduced UID strategy options based on our experiences with FSGroup and SupplementalGroups
Basically as long as you can define n ranges there is no need for a specific "must run as X id" - just make a range that starts and ends with the same value
Of note, in recent discussions we have found we need more than what the allowed-capabilities slice will provide. There seem to be three items that need to be covered (see the sketch after this list):
caps you're allowed to add but not defaulted
caps that you get by default but can drop explicitly
caps you must drop
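A rough sketch of how those three buckets could be checked; the field names are hypothetical, just to make the distinction concrete:
def validate_capabilities(add, drop, policy):
    # 1. caps that may be added but are not granted by default
    illegal_adds = set(add) - set(policy["allowedAddCapabilities"])
    # 2. caps granted by default unless explicitly dropped
    effective = (set(policy["defaultCapabilities"]) | set(add)) - set(drop)
    # 3. caps that must be dropped
    missing_drops = set(policy["requiredDropCapabilities"]) - set(drop)
    ok = not illegal_adds and not missing_drops
    return ok, effective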
I can either update this to include those fields (and others we have added since this proposal started) or remove caps from this proposal and do it as a follow up so we can get the mechanics in with less complexity. Thoughts?
@pmorie - any opinion on the comments above? I'm going to begin work on the admission controller piece
@erictune ready for your eyes again
rebased. @erictune let me know if anything is holding this up for the api type going in. Thanks!
lgtm.
feel free to reapply or have someone reapply lgtm after you rebase.
@bgrant0607 just said this has to go into extensions/v1beta1 for at least a release before it can go into legacy api group and v1.
Oh, duh, it is in extensions/v1beta1. Please ignore last comment.
:-) whew, thought there was some new api piece I missed. Will rebase tonight. tyvm!
IIUC, when this PR goes in, we have the type and endpoint, but no actual enforcement.
Is that enforcement going to come in soon, so it gets into 1.2? If not, that's understandable, but we should disable this type by default if it does not do anything.
not likely by tomorrow. I will disable it for now and add enabling it by default in the next PR with the admission controller
SGTM
Special award for longest-lived PR that eventually merged!
Almost 9 months old! This PR can crawl and eat solid food! :baby: :baby_symbol:
:smiley: rofl
How much work is it to write an admission controller that looks at PSPs but treats all users and groups the same?
not a ton. Can probably be done by the end of next week based on my other commitments. It would include:
admission controller
strategy implementations
provider implementation
resource enablement
Most of this is just a straight migration from what is already in OpenShift and whittling it down to be less specific.
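As a rough sketch of the strategy piece in Go, loosely mirroring the OpenShift shape (the interface and method names are assumptions):

package psp

// RunAsUserStrategy both defaults and validates one security field; each
// policy option (UIDs, FSGroup, capabilities, ...) would get a strategy.
type RunAsUserStrategy interface {
    // Generate returns a default UID for the container, or nil for "no opinion".
    Generate() (*int64, error)
    // Validate checks a pod-supplied UID against the policy.
    Validate(uid *int64) error
}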
Aaand it broke the build:
07:24:01 /workspace/kubernetes/docs/api-reference is out of date. Please run hack/update-api-reference-docs.sh
07:24:01 FAILED
I'll send a fix in a second.
thanks @gmarek - hrm, I was running update-all/verify-all after every rebase, so I'm surprised it broke. Apologies
Np:) It was tempting to revert this PR, just to make it hang there a little longer ;)
you have no idea how glad I am that you didn't. :beers:
@gmarek - crap, this does need to be reverted. It does not have my changes for disabling the resource by default or the comment updates that were requested. New PR, or revert and fix this one?
https://github.com/kubernetes/kubernetes/commit/3cfb090939c06228feab5ed07e2d4e49b0489065#diff-c47934bf31679532191ed2b519d74399R591
https://github.com/kubernetes/kubernetes/pull/20721
| gharchive/pull-request | 2015-05-07T14:22:53 | 2025-04-01T04:32:35.593082 | {
"authors": [
"davidopp",
"deads2k",
"erictune",
"gmarek",
"jdef",
"liggitt",
"mikedanese",
"mrry550",
"pmorie",
"pweil-",
"smarterclayton"
],
"repo": "GoogleCloudPlatform/kubernetes",
"url": "https://github.com/GoogleCloudPlatform/kubernetes/pull/7893",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
77128714 | Run tests for third-party code
The deep_equal_test is not being run.
@zmerlynn Am I missing something?
Also maybe needed for https://github.com/GoogleCloudPlatform/kubernetes/pull/8858
@bgrant0607 i feel like we should figure this out for 1.0
LGTM.
cc @ixdy @lavalamp
Does this run the tests in third_party? The original assumption was that third_party was for non-Go code.
And for code that was tested by someone else.
Maybe we should just run third_party/golang?
Yeah... I'm all for testing everything, but don't we have enough fish to fry maintaining our own tests?
@zmerlynn The issue is that we've forked some code in third_party/golang, so we really should be testing it.
Correct, this is to run tests for forked code.
I'm worried that this raises the barrier to entry into third_party and would potentially force us to fork code bases we haven't already forked in order to fix flaky tests in them. We have a third_party/forked dir, can we restrict to that, possibly after reorg?
@zmerlynn
I'm not sure I see how this raises the barrier; what am I missing?
Am I misreading something, or isn't it now including all of third_party? Sorry, bouncing between things. So if someone wants to put package foo in third_party and it happens to have some Go code in it, they have to make sure it has no flaky tests? I guess that seems like an unlikely scenario if we're not also forking the code. Okay, nevermind.
LGTM.
@zmerlynn
Correct, if you fork code into third_party, you have to make sure the tests work.
third_party only includes Go code that we've modified for some reason (and other non-Go code, but we don't care about that here). We should really run the tests in it.
Go dependencies that we haven't changed are in the Godeps dir, and in theory we don't need to run their unit tests -- whoever last updated the dependency should have verified that it worked.
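For illustration, a test of this shape living next to forked code under third_party would now be exercised by the runner; the path, package name, and test case are hypothetical, and only the standard library is used:

package forked

import (
    "reflect"
    "testing"
)

// TestDeepEqualMaps is a placeholder for the kind of test forked code should
// carry; here it just checks that two equal maps compare as deeply equal.
func TestDeepEqualMaps(t *testing.T) {
    a := map[string][]int{"k": {1, 2}}
    b := map[string][]int{"k": {1, 2}}
    if !reflect.DeepEqual(a, b) {
        t.Fatalf("expected %v and %v to be deeply equal", a, b)
    }
}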
| gharchive/pull-request | 2015-05-16T20:52:48 | 2025-04-01T04:32:35.605286 | {
"authors": [
"bgrant0607",
"lavalamp",
"pmorie",
"zmerlynn"
],
"repo": "GoogleCloudPlatform/kubernetes",
"url": "https://github.com/GoogleCloudPlatform/kubernetes/pull/8388",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2186843742 | Nina/Add MaxText GPU tests with XPK
Description
Add decode tests for Llama2, Mistral, and Gemma on GPU by using end-to-end bash scripts in the MaxText repo via XPK.
Tests
Please describe the tests that you ran on Cloud VM to verify changes.
Instruction and/or command lines to reproduce your tests: ...
Upload to Airflow, and test.
List links for your tests (use go/shortn-gen for any internal link): ...
https://540cef12d3da42ce97b48a97401b6c83-dot-us-central1.composer.googleusercontent.com/dags/maxtext_end_to_end/grid?dag_run_id=manual__2024-03-14T18%3A45%3A52.666342%2B00%3A00&task_id=maxtext-stable-test_llama2-h100-80gb-8.run_model._run_workload&tab=logs
Checklist
Before submitting this PR, please make sure (put X in square brackets):
[x] I have performed a self-review of my code.
[x] I have necessary comments in my code, particularly in hard-to-understand areas.
[x] I have run one-shot tests and provided workload links above if applicable.
[x] I have made or will make corresponding changes to the doc if needed.
Thanks Nina!
A few high-level comments/questions:
Is it ok to change the title to something like Add MaxText GPU tests with XPK?
Is there any needed permission change in project cloud-ml-auto-solutions? If so, we could add them into the terraform template.
Thanks, @RissyRan, done for point 1.
For point 2, cloud-ml-auto-solutions needs permission to use supercomputer-testing clusters.
It seems the test does not run properly? One example here
I think it is just logging. The job exits normally (https://screenshot.googleplex.com/77oWiuuDoqj2WDo).
Sometimes if the job is sent via xpk successfully, but fails execution, I can see a red dot at wait_for_workload_completion. (example)
Interesting! I did not see training logs on GPU.
One example:
TPU logs on Mistral: https://screenshot.googleplex.com/7wrNoL7yN6L8rTH
GPU logs on Mistral: https://screenshot.googleplex.com/8jj9aPQRe2sxebP
Did I miss anything?
cc @jonb377
Good catch @RissyRan, I see the XPK workload is running (cd /deps && bash gpu_multi_process_run.sh) instead of the train script. @NinaCai I think it's a bug in XPK: http://shortn/_nwVkm1F5sA
This is on purpose; it just runs the commands in multiple processes. But the script is not in the maxtext master branch, which is why it skips the whole script and just exits. @yangyuwei@google.com
| gharchive/pull-request | 2024-03-14T16:57:03 | 2025-04-01T04:32:35.761143 | {
"authors": [
"NinaCai",
"RissyRan",
"jonb377"
],
"repo": "GoogleCloudPlatform/ml-auto-solutions",
"url": "https://github.com/GoogleCloudPlatform/ml-auto-solutions/pull/199",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2413860812 | refactor,chore(egen): refactored code according to template guidelines
This PR includes
1. Adds Colab authentication and kernel restart steps.
2. Adds the Colab Enterprise logo with a link.
3. Removes the IS_TESTING condition.
4. Replaces REGION with LOCATION.
5. Removes unused modules.
6. Removes UUID code.
REQUIRED: Fill out the below checklists or remove if irrelevant
If you are opening a PR for Official Notebooks under the notebooks/official folder, follow this mandatory checklist:
[x] Use the notebook template as a starting point.
[x] Follow the style and grammar rules outlined in the above notebook template.
[x] Verify the notebook runs successfully in Colab since the automated tests cannot guarantee this even when it passes.
[x] Passes all the required automated checks. You can locally test for formatting and linting with these instructions.
[ ] You have consulted with a tech writer to see if tech writer review is necessary. If so, the notebook has been reviewed by a tech writer, and they have approved it.
[ ] This notebook has been added to the CODEOWNERS file under the Official Notebooks section, pointing to the author or the author's team.
[x] The Jupyter notebook cleans up any artifacts it has created (datasets, ML models, endpoints, etc) so as not to eat up unnecessary resources.
/gcbrun
/gcbrun
| gharchive/pull-request | 2024-07-17T15:01:05 | 2025-04-01T04:32:35.782986 | {
"authors": [
"Jayakrishna2801",
"gericdong",
"katiemn"
],
"repo": "GoogleCloudPlatform/vertex-ai-samples",
"url": "https://github.com/GoogleCloudPlatform/vertex-ai-samples/pull/3278",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
836225796 | reference docs: live
Overhaul and audit of all reference docs for live commands.
See #1577 for guidance on command references.
We do want to overhaul the UX for some of the kpt live commands (tracked separately), but the reference docs have been updated to match the current behavior.
| gharchive/issue | 2021-03-19T17:39:06 | 2025-04-01T04:32:35.790629 | {
"authors": [
"frankfarzan",
"mortent"
],
"repo": "GoogleContainerTools/kpt",
"url": "https://github.com/GoogleContainerTools/kpt/issues/1572",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
684891001 | Revisit kpt fn run sinking config
Users are often confused about where the kpt fn run command writes newly created configuration, as it's not clear that it automatically sinks to the same directory it sourced configs from. Other tools like sed source and manipulate content but print it to the console instead of writing to disk.
We should consider switching default behavior to only source and run functions and provide a --sink-dir flag to optionally write out configs.
cc @linde
I agree: this has been reported quite often. Since this will involve a breaking change, it is a good candidate for the 1.0 milestone.
/cc @frankfarzan
kpt v1 supports out-of-place mode in fn render and fn eval. We have improved the docs to explain the default behavior of in-place sink. I haven't seen any reports about the confusion since v1, so marking this closed.
| gharchive/issue | 2020-08-24T18:54:30 | 2025-04-01T04:32:35.793159 | {
"authors": [
"droot",
"prachirp"
],
"repo": "GoogleContainerTools/kpt",
"url": "https://github.com/GoogleContainerTools/kpt/issues/972",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
858178305 | Add tests for invalid UpstreamLock information.
https://github.com/GoogleContainerTools/kpt/issues/1682
Note that there's no error for a bad ref in the UpstreamLock. The error message for a bad dir references missing subpackages.
@mortent I'm OoO the next business day, so if the proposed test looks good to you, feel free to merge.
| gharchive/pull-request | 2021-04-14T18:56:05 | 2025-04-01T04:32:35.794473 | {
"authors": [
"etefera"
],
"repo": "GoogleContainerTools/kpt",
"url": "https://github.com/GoogleContainerTools/kpt/pull/1753",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
276060600 | Q: Create pagelist for services
I want to create a "services" page (with a dynamic page list generator).
I follow this tutorial:
http://www.endurojs.com/blog/how-to-make-a-blog-with-endurojs
Works great!! Very easy to set up a blog like this.
Q
If I want to make the same idea work for a "services" generator - what should I change?
I tried changing /assets/hbs_helpers/blog.js to /assets/hbs_helpers/services.js, Ctrl+F-replaced every occurrence of blog with services, and changed to this code:
{{#services}}
{{#each this}}
<article>
<h2>{{services_entry.title}}</h2>
<p>{{{services_entry.text}}}</p>
<a href="/services/{{page_slug}}">Read more...</a>
</article>
{{/each}}
{{/services}}
It does not work (no output).
Could you just reuse the blog helper to create the services page?
Did you create the generator template(/pages/generators/services.hbs) and some context files(/cms/generators/services/ezras_service.js)?
Yes. What do you mean by saying "reuse the blog helper"?
The structure is ok (I have a list of services under /cms/generators/services/).
seems to be working for me :-/
Hey, you can log stuff out from the helper and also from the template by doing
{{#services}}
{{log this}}
{{#each this}}
Yes. I found something really weird - this works only when the name is different.
When I change the name of the page list to "services" I don't get any output (weird, but this is what happens).
Also (please check this) - on version 1.4.41:
When I create a blog page, the "folder" icon in the admin disappears (it looks like this):
Good (option to add post)
When I change the blog list page to "blog", there is no "folder-icon" option to add posts in the admin:
I will check this again and will add a post. Maybe this was a cache issue. Thanks
| gharchive/issue | 2017-11-22T13:05:40 | 2025-04-01T04:32:35.830514 | {
"authors": [
"Ezra-Siton-UIX",
"Gottwik"
],
"repo": "Gottwik/Enduro",
"url": "https://github.com/Gottwik/Enduro/issues/182",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
119524019 | jspm node support
This PR includes the necessary jspm configuration for this package to work in Node with the coming jspm 0.17 release, allowing this single package to provide both client and server support.
I completely understand if you'd rather not support this here, but it will simplify user configuration locally having it in one place. There is no maintenance burden either as I would continue to maintain this personally.
Just let me know if you have any questions at all, and thanks for considering.
Strongly leaning away from this in the various modules, but if a really solid argument can be made as to why this should be our concern versus jspm just working correctly, I could reconsider. I just don't find jspm to be the top contender for client packaging right now over webpack/browserify, and thus would expect it to maintain the proper set of needed hacks versus all modules being updated.
Sure, thanks for the consideration, moving this into an internal implementation.
| gharchive/pull-request | 2015-11-30T16:11:44 | 2025-04-01T04:32:35.931736 | {
"authors": [
"defunctzombie",
"guybedford"
],
"repo": "Gozala/events",
"url": "https://github.com/Gozala/events/pull/25",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1285730923 | Cannot compile for Godot 4.0 - identifier not found
Hello,
I can't get the 4.0 branch to compile at all, I get the following error:
modules\register_module_types.gen.cpp(98): error C3861: 'initialize_godotsteam_module': identifier not found
modules\register_module_types.gen.cpp(246): error C3861: 'uninitialize_godotsteam_module': identifier not found
My SCons flags: scons -j8 platform=windows production=yes tools=yes target=release_debug
I've tried to compile on both Windows with Visual Studio and Linux with MinGW and get the exact same error.
I wish to add some other modules so I unfortunately can't use the pre-compiled version either. 😞
Any help would be greatly appreciated.
Thanks!
Godot 4 Beta released. Now seems like a good time to start.
We did! A new version of the Godot 4.x module went out with alpha 17, I think. It will get an update for beta.
| gharchive/issue | 2022-06-27T12:26:15 | 2025-04-01T04:32:35.937940 | {
"authors": [
"Anutrix",
"Gramps",
"LuniaDev"
],
"repo": "Gramps/GodotSteam",
"url": "https://github.com/Gramps/GodotSteam/issues/259",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
56972797 | Guice: Use child injector instead of scoping
So Sponge Forge switched from using scoping for plugins to child injectors. It's 10000% easier to understand and use, so should probably be changed by us as well.
If curious, basically right now Guice injects plugins kinda like this:
pluginInstance = clazz.newInstance();
try {
scope.enter(pluginInstance);
// @DefaultConfig, etc injected using this
injector.injectMembers(pluginInstance);
} finally {
scope.exit();
}
With PluginScope, it's kinda confusing. The scope has to be entered so that the bindings work properly.
With child injectors, it becomes more like this:
// 'injector' is the global injector (there is only one)
Injector pluginInjector = injector.createChildInjector(new GranitePluginGuiceModule());
// @DefaultConfig, etc injected by getInstance before returning
pluginInstance = pluginInjector.getInstance(clazz);
thus, no PluginScope or PluginScoped classes are required, and the code to create & inject into plugins is incredibly simple!
I can do this, but just creating an issue so I don't forget if I don't do it soon.
@Hidendra Ping
@Voltasalt Pong
@Hidendra Just wanted to remind you :P
my memory isn't that short term! :cry:
will look at this this evening
super important pr created >>>> #136
| gharchive/issue | 2015-02-08T23:08:58 | 2025-04-01T04:32:35.970718 | {
"authors": [
"Hidendra",
"Voltasalt"
],
"repo": "GranitePowered/Granite",
"url": "https://github.com/GranitePowered/Granite/issues/132",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2431995775 | added hero section component by shikha jha
Done with pushing my files and folders. Completed my task. Successfully created a pull request on the GitHub repository.
Task submitted.
| gharchive/pull-request | 2024-07-26T11:05:00 | 2025-04-01T04:32:36.017847 | {
"authors": [
"shikha-jhaa"
],
"repo": "GrapplTech/GrapplTech-Community-Built-Web-Components",
"url": "https://github.com/GrapplTech/GrapplTech-Community-Built-Web-Components/pull/214",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1267470295 | Translate README_zh.md
I have added a Chinese translation of README.md
and also fixed the entries of the other README files.
Good morning China. Right now I have ice cream. I really like ice cream. But Fast & Furious 9 is better than ice cream. Fast & Furious. Fast & Furious 9. I like it the most. So... now it's music time. Ready: 1, 2, 3. Two weeks from now: Fast & Furious 9 ×3. Don't forget. Don't miss it. Remember to go to the cinema to watch Fast & Furious 9, because it's a very good movie. The action is very good. Almost as good as ice cream. Goodbye.
| gharchive/pull-request | 2022-06-10T12:21:25 | 2025-04-01T04:32:36.019455 | {
"authors": [
"Yang-qwq",
"nitrog0d"
],
"repo": "Grasscutters/GrassClipper",
"url": "https://github.com/Grasscutters/GrassClipper/pull/95",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2146871366 | GSF IF Meeting Agenda 2024-02-22
Time 10:30am (GMT) / 4:00pm (IST) - See the time in your timezone
Lead – @srini1978 (Microsoft)
Co-Lead @navveenb (Accenture)
Co-Lead – @jawache (GSF)
PM - @jmcook1186 (GSF)
Antitrust Policy
Joint Development Foundation meetings may involve participation by industry competitors, and it is the intention of the Joint Development Foundation to conduct all of its activities in accordance with applicable antitrust and competition laws. It is therefore extremely important that attendees adhere to meeting agendas, and be aware of, and not participate in, any activities that are prohibited under applicable US state, federal or foreign antitrust and competition laws.
If you have questions about these matters, please contact your company counsel or counsel to the Joint Development Foundation, DLA Piper.
Roll Call
Please add 'Attended' to this issue during the meeting to denote attendance.
Any untracked attendees will be added by the GSF team below:
Agenda
[x] Refactor update
[x] IF training workshops
[ ] Hackathon preparations
Issues
[ ] Discuss in-progress/blocked issues on project board: https://github.com/orgs/Green-Software-Foundation/projects/26/views/1
[ ] AOB
Attended
Attended
Attended
Attended
Attended
Attended
Attended
Attended
Closing as complete
| gharchive/issue | 2024-02-21T14:18:53 | 2025-04-01T04:32:36.070938 | {
"authors": [
"SKushwaha1",
"devidayal22",
"jmcook1186",
"manushak",
"narekhovhannisyan",
"perkss",
"rish05",
"srini1978"
],
"repo": "Green-Software-Foundation/if",
"url": "https://github.com/Green-Software-Foundation/if/issues/450",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
197132867 | Messages cannot be received when enabled=false, which is inconvenient
Isn't it a problem that, under the current spec, system messages such as mount and unmount do not reach disabled components?
Current behavior:
awake is called unconditionally.
mount is not called if the component is disabled, even after awake has been called; it is re-notified when enabled changes.
The same applies to unmount.
Therefore, after receiving mount on the tree, if you disable the component and remove it from the tree, unmount is never called.
Proposed improvement:
The following messages are invoked at their specific times regardless of enabled:
awake: immediately before the first mount
mount: the moment the component is added to the tree
unmount: the moment the component is removed from the tree
dispose: immediately before the component is destroyed
What other messages are needed?
Something like mounted: during the initial parse, mounting proceeds down from the parent, so this would be called once the children have finished mounting.
onEnable and onDisabled, to notify of enabled/disabled changes.
We would like attribute#watch to not react when the target component is disabled.
That's all.
"authors": [
"moajo"
],
"repo": "GrimoireGL/GrimoireJS",
"url": "https://github.com/GrimoireGL/GrimoireJS/issues/415",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1538138785 | 🛑 walletcheck API endpoint is down
In c33299e, walletcheck API endpoint (https://grinnode.live:8080/walletcheck/grin1zxwrf5yaxlyps4mpx3n7j9kp4su3gzgpdhfk2sgv56q0prcdlzls9e6e0y) was down:
HTTP code: 0
Response time: 0 ms
Resolved: walletcheck API endpoint is back up in b9505ae.
| gharchive/issue | 2023-01-18T14:55:46 | 2025-04-01T04:32:36.181162 | {
"authors": [
"MCM-Mike"
],
"repo": "Grinnode-live/upptime",
"url": "https://github.com/Grinnode-live/upptime/issues/907",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2555215350 | 🛑 www.groundtruth.co.nz is down
In a6f0269, www.groundtruth.co.nz (https://www.groundtruth.co.nz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: www.groundtruth.co.nz is back up in d96d440 after 25 minutes.
| gharchive/issue | 2024-09-29T23:30:39 | 2025-04-01T04:32:36.186433 | {
"authors": [
"logan12358"
],
"repo": "Groundtruth/upptime-status-page",
"url": "https://github.com/Groundtruth/upptime-status-page/issues/188",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
303217987 | MTF spawned players receive broken radio
Sometimes MTF-spawned players receive a broken radio. Probably a game issue.
Screenshot: https://media.discordapp.net/attachments/410829313579941889/420999405282394122/unknown.png
seems to be fixed
| gharchive/issue | 2018-03-07T18:54:26 | 2025-04-01T04:32:36.188389 | {
"authors": [
"realmbmc"
],
"repo": "Grover-c13/MultiAdmin",
"url": "https://github.com/Grover-c13/MultiAdmin/issues/134",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1584153999 | 🛑 Fownhost is down
In a13a229, Fownhost (https://fownhost.ga) was down:
HTTP code: 530
Response time: 3321 ms
Resolved: Fownhost is back up in 124bf1b.
| gharchive/issue | 2023-02-14T13:16:43 | 2025-04-01T04:32:36.202406 | {
"authors": [
"Guangsudalao"
],
"repo": "Guangsudalao/uptime",
"url": "https://github.com/Guangsudalao/uptime/issues/157",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
246448920 | Fix minor typo
Added an "an" where there should be one.
Renaming skills has to be done manually, so I did this one myself.
| gharchive/pull-request | 2017-07-28T20:17:39 | 2025-04-01T04:32:36.203927 | {
"authors": [
"brac",
"deadlyicon"
],
"repo": "GuildCrafts/curriculum",
"url": "https://github.com/GuildCrafts/curriculum/pull/169",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2169510464 | Update docs on locale configuration
locale at root was deprecated in #212, and theme/language only works for the Material theme.
Relevant implementation:
https://github.com/Guts/mkdocs-rss-plugin/blob/15938261138769919c714e60d4925c58bffa81e1/mkdocs_rss_plugin/util.py#L720-L723
Thanks!
| gharchive/pull-request | 2024-03-05T15:22:04 | 2025-04-01T04:32:36.215716 | {
"authors": [
"Guts",
"YDX-2147483647"
],
"repo": "Guts/mkdocs-rss-plugin",
"url": "https://github.com/Guts/mkdocs-rss-plugin/pull/256",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2169510464 | Calling setData() multiple times throws java.lang.NullPointerException
As the title says, I want to dynamically set the number of tabs with setData() multiple times based on backend data, but it throws java.lang.NullPointerException: Attempt to invoke virtual method 'int android.view.View.getLeft()' on a null object reference
See #129; yours is probably an adapter problem.
@v587jasonzou
I'm not using it together with Fragment and ViewPager; I'm just using it on its own to refresh list data.
private void initTabs() {
    mTabEntities.clear();
    for (int i = 0; i < titleEntities.size(); i++) {
        mTabEntities.add(new TabEntity(titleEntities.get(i).getTitle(), 0, 0));
    }
    try {
        frBjpTl.setTabData(mTabEntities);
        frBjpTl.setOnTabSelectListener(new OnTabSelectListener() {
            @Override
            public void onTabSelect(int position) {
                itemSelected = position;
                id = titleEntities.get(itemSelected).getId();
                getOrderInfo();
            }

            @Override
            public void onTabReselect(int position) {
            }
        });
    } catch (Exception e) {
    }
}
| gharchive/issue | 2019-04-27T07:00:20 | 2025-04-01T04:32:36.228954 | {
"authors": [
"MirZou",
"v587jasonzou"
],
"repo": "H07000223/FlycoTabLayout",
"url": "https://github.com/H07000223/FlycoTabLayout/issues/423",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1615511610 | 🛑 Unsere Schulkindbetreuung.de is down
In bd7d9b7, Unsere Schulkindbetreuung.de (https://Unsere Schulkindbetreuung.de) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Unsere Schulkindbetreuung.de is back up in e9c86ab.
| gharchive/issue | 2023-03-08T15:56:50 | 2025-04-01T04:32:36.234641 | {
"authors": [
"h2-deploy-20203399221"
],
"repo": "H2-invent/status",
"url": "https://github.com/H2-invent/status/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2354690308 | 🛑 Unsere Schulkindbetreuung is down
In 5347a7f, Unsere Schulkindbetreuung (https://unsere-schulkindbetreuung.de) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Unsere Schulkindbetreuung is back up in ccbb716 after 4 minutes.
| gharchive/issue | 2024-06-15T09:22:33 | 2025-04-01T04:32:36.236940 | {
"authors": [
"H2-invent-Bot"
],
"repo": "H2-invent/status",
"url": "https://github.com/H2-invent/status/issues/222",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
451444904 | [Bug] _ indicators are static
Indicators with _ in their name are static on the chart (visible as a horizontal line). For example, compare the standard PPO and standard MACD indicators: PPO is broken (a static line of a single value on the chart), while MACD is OK (different values).
I updated to https://github.com/H256/gekko with the last commit on 2/21/19. Before that I used an old version with gekko 0.5.x and it worked OK.
Hi @segatrade, thanks for your report.
There seems to be something wrong with the emitted strategy-updates in the original gekko repo. I have deployed a fix to my fork that was working on my side. Could you please test by pulling the current development branch again from here: https://github.com/H256/gekko - just give me short feedback on whether the patch resolves this.
(I also updated this version to be based on the current changes inside the original repo)...
Hi @H256, thank you very much for your project and your reply.
The fix seems to be working.
Thank you.
| gharchive/issue | 2019-06-03T12:03:56 | 2025-04-01T04:32:36.239775 | {
"authors": [
"H256",
"segatrade"
],
"repo": "H256/gekko-quasar-ui",
"url": "https://github.com/H256/gekko-quasar-ui/issues/22",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |