Dataset columns:
added: string (date), 2025-04-01 04:05:38 – 2025-04-01 07:14:06
created: timestamp[us] (date), 2001-10-09 16:19:16 – 2025-01-01 03:51:31
id: string, length 4–10
metadata: dict
source: string, 2 classes
text: string, length 0 – 1.61M
2025-04-01T04:10:26.512171
2024-06-12T23:06:08
2349816634
{ "authors": [ "dzikowski", "sentientforest" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14257", "repo": "GalaChain/sdk", "url": "https://github.com/GalaChain/sdk/pull/264" }
gharchive/pull-request
Dry run flag simulates chaincode without writes New feature proposal - add a dryRun: true property to any contract method input that inherits from the ChainCallDTO class. Setting this property to true will execute the full chaincode lifecycle up until the GalaChainStubCache would normally flushWrites(). Instead of writing the results, return the writes, reads, and deletes as a new property on GalaChainResponse: ReadWriteSet. Proposed an alternative approach here: #281 Nice, @dzikowski , looks to me like your alternative is simpler and should be easier to maintain going forward. Glad to see that we can use evaluate here in chaincode in lieu of modifying submit methods to forgo their writes when a flag is provided. Unless anyone else has comments, I think we should close out my pull request and go with your alternative. #281 merged, closing this one. Thanks @sentientforest!
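A minimal TypeScript sketch of the request/response shape proposed in this PR (field and type names here are hypothetical illustrations, and this reflects the original proposal, not the alternative that was merged in #281):

```typescript
// Hypothetical DTO: any input inheriting from ChainCallDTO could opt into a dry run.
interface ChainCallDTO {
  dryRun?: boolean;
}

// Hypothetical shape of the captured operations returned instead of being flushed.
interface ReadWriteSet {
  reads: Record<string, string>;
  writes: Record<string, string>;
  deletes: string[];
}

interface GalaChainResponse<T> {
  data?: T;
  // Populated only when dryRun was true.
  readWriteSet?: ReadWriteSet;
}

// In-memory stand-ins for the real GalaChainStubCache behaviour (illustrative only).
const pending: ReadWriteSet = { reads: {}, writes: {}, deletes: [] };
function flushWrites(): void {
  /* persist pending.writes and pending.deletes to the ledger */
}

// Sketch of the proposed control flow: run the full lifecycle, then either flush or report.
async function executeMethod<T>(dto: ChainCallDTO, run: () => Promise<T>): Promise<GalaChainResponse<T>> {
  const result = await run();
  if (dto.dryRun) {
    // Skip flushWrites() and return the writes/reads/deletes that would have been applied.
    return { data: result, readWriteSet: pending };
  }
  flushWrites();
  return { data: result };
}
```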
2025-04-01T04:10:26.522622
2024-03-27T16:20:38
2211233581
{ "authors": [ "nicolasburtey", "proofofjogi" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14258", "repo": "GaloyMoney/galoy-mobile", "url": "https://github.com/GaloyMoney/galoy-mobile/issues/3140" }
gharchive/issue
Quiz stalls Describe the bug: While doing the quiz, I answered a question but the next one does not unlock. My local app state: in "Evolution of Money I", question 1 is answered, but its button still says "Earn 2 sats"; question 2 is greyed out ("earn 2 sats -> To unlock, answer ..."). When trying to answer question 1 again to move forward, I get an error toast message: "Error quiz question was already claimed" without unlocking question 2. To Reproduce: IDK how to reproduce. Expected behavior: Quiz should continue. Smartphone (please complete the following information): Device: iPhone 12, OS: iOS 16.6.1, Blink app <IP_ADDRESS>8 If you try again today, are you able to do the next section? Maybe this is something we need to clarify, but only one section can be done per day. I restarted the app and that resolved it. I cannot say if doing it a day later fixed it though; I did the restart on a different day, so that might confound the bug.
2025-04-01T04:10:26.526304
2016-09-07T10:27:33
175467653
{ "authors": [ "Kavignon" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14259", "repo": "GameOfLightAndShadows/A-game-of-light-and-shadows", "url": "https://github.com/GameOfLightAndShadows/A-game-of-light-and-shadows/issues/24" }
gharchive/issue
Weapon hit limit In real life, using the same weapon over and over again will make it dull to the point where it becomes unusable. With this in mind, weapons will have a usage limit. The strength of the weapon and its effectiveness should also drop over time with use. Once the remaining uses or the hit limit reaches 0, the weapon will disappear from the inventory and it will be up to the player to acquire a new weapon from the inventory, if any are left.
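A minimal TypeScript sketch of the durability rule described above (purely illustrative and not necessarily the project's own language; all names are hypothetical):

```typescript
// Hypothetical sketch of the proposed weapon-durability mechanic.
interface Weapon {
  name: string;
  strength: number;      // base damage
  hitsRemaining: number; // usage / hit limit
  maxHits: number;
}

// Each hit wears the weapon down; effectiveness scales with remaining durability.
function useWeapon(weapon: Weapon): number {
  const effectiveness = weapon.hitsRemaining / weapon.maxHits;
  weapon.hitsRemaining -= 1;
  return weapon.strength * effectiveness;
}

// Once the hit limit reaches 0, the weapon disappears from the inventory.
function cleanupInventory(inventory: Weapon[]): Weapon[] {
  return inventory.filter(w => w.hitsRemaining > 0);
}
```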
2025-04-01T04:10:26.560688
2020-10-23T16:45:02
728365334
{ "authors": [ "geekyaditya100", "nanda-mik", "sachinsom93", "shivarajloni" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14260", "repo": "GameofSource-GFG/Web-Development", "url": "https://github.com/GameofSource-GFG/Web-Development/issues/39" }
gharchive/issue
GOSW17-Create an event table in database Storing the data from the event registration form in the database. Could you please elaborate more specifically? We will provide credentials to the database. You need to push the data collected from the registration form to the database. Which database will be used in this project, mongo or sql? @sachinsom93 Firestore is used as the database. Can I work on this @nanda-mik ? Kindly provide the credentials of the database @geekyaditya100 .
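A minimal TypeScript sketch of pushing the registration-form data to Firestore, as discussed above (written against the current modular Firebase Web SDK; the collection name, form fields, and config values are assumptions, with the real credentials to be supplied by the maintainers):

```typescript
import { initializeApp } from "firebase/app";
import { getFirestore, collection, addDoc } from "firebase/firestore";

// Placeholder config; the real credentials would be provided by the project maintainers.
const app = initializeApp({ projectId: "your-project-id", apiKey: "your-api-key" });
const db = getFirestore(app);

// Hypothetical shape of the event registration form.
interface Registration {
  name: string;
  email: string;
  event: string;
}

// Push one registration document into an "event-registrations" collection (name assumed).
async function saveRegistration(data: Registration): Promise<string> {
  const ref = await addDoc(collection(db, "event-registrations"), {
    ...data,
    submittedAt: new Date().toISOString(),
  });
  return ref.id;
}
```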
2025-04-01T04:10:26.561836
2021-11-08T14:33:15
1047524597
{ "authors": [ "GamerJoep", "PatatjeMC" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14261", "repo": "GamerJoep/MinetopiaVehicles", "url": "https://github.com/GamerJoep/MinetopiaVehicles/pull/13" }
gharchive/pull-request
Kick fix Fix for a kick that can occur when reentering vehicles too quickly, or just reentering on a laggy server. Hey, thanks for contributing to MTVehicles, but we already partially solved it with changes of that type; they just haven't been pushed to GitHub yet. That wasn't the only problem, though; there were some other small mistakes that I'm still looking for.
2025-04-01T04:10:26.593684
2017-03-24T02:04:21
216624078
{ "authors": [ "Gbuomprisco", "omprakash9" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14262", "repo": "Gbuomprisco/ng2-tag-input", "url": "https://github.com/Gbuomprisco/ng2-tag-input/issues/290" }
gharchive/issue
Issue on upgrading to 0.9.10 Hi, previously, when I was using version 0.7.4, I used to get the whole Item object from onItemAdded() as shown below. But when I upgraded to 0.9.10, I'm getting only two properties, i.e. itemId and displayValue, as mentioned below. I need the whole object to be returned in 0.9.10, as in 0.7.4. version 0.7.4: apexUniqueId:"BRAND-1122" confidence:"UNSCORED" core:"BRAND" displayHTML:"Refresh Your car" displayValue:"Refresh Your Car" itemId:1111 pla:"RYC" productLine:"Refresh Your Car" score:1.1094004 version 0.9.10: displayValue:"Refresh Your Car" itemId:1111 Hi @Gbuomprisco , I'm just wondering how soon this will get fixed, because our project needs to go to production. Hi @omprakash9, sorry - not before the weekend. Okay @Gbuomprisco , please let me know once it gets fixed. Thank you! Hi @omprakash9, if you could pull https://github.com/Gbuomprisco/ng2-tag-input/pull/293 and try it out to see if it fixes your issues (this one and the IE-related one), it would be great! I can't test IE/Spark unfortunately. @Gbuomprisco , can you please publish it as a new version, so that I can update my plugin and check it locally? @omprakash9 I'll probably publish under the tag beta, in case there are any bugs. Hi @omprakash9, I released under the tag beta, so you need to specify the version (1.0.0 - 🎉 ). Let me know! :) @Gbuomprisco It is now fixed, returning the whole object, thanks.
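A hedged Angular/TypeScript sketch of the usage pattern this thread is about, i.e. reading custom fields from the full item object in the add handler. The tag-input selector, the (onAdd) output, and the ngModel binding are assumptions about the library's API of that era, and the handler name mirrors the reporter's own onItemAdded():

```typescript
import { Component } from "@angular/core";

// Hypothetical item shape, matching the fields shown in the report above.
interface BrandItem {
  itemId: number;
  displayValue: string;
  apexUniqueId?: string;
  productLine?: string;
  score?: number;
}

@Component({
  selector: "app-brand-tags",
  // The (onAdd) output is an assumption about the ng2-tag-input API at this version.
  template: `<tag-input [(ngModel)]="selected" (onAdd)="onItemAdded($event)"></tag-input>`,
})
export class BrandTagsComponent {
  selected: BrandItem[] = [];

  // With the fix in PR #293 the full object should arrive here, not just itemId/displayValue.
  onItemAdded(item: BrandItem): void {
    console.log(item.apexUniqueId, item.productLine, item.score);
  }
}
```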
2025-04-01T04:10:26.596019
2019-07-13T17:30:09
467746689
{ "authors": [ "Geal", "Havvy", "rgiot" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14263", "repo": "Geal/nom", "url": "https://github.com/Geal/nom/issues/994" }
gharchive/issue
Alt with more than 21 elements In nom 5.0, the alt function is limited to 21 elements. In my use case, I have far more elements than that. What is the best strategy to handle this case without manually generating larger cases? On my side I may write a function fed with a slice of combinators, but maybe nom deserves that kind of function. Can you put another alt as an element of the alt? @Havvy Indeed, I have finally solved my issue with this trick instead of writing another function. The catch is that I had to do it manually, unfortunately. To implement alt I had to choose an arbitrary limit on the number of parsers. I could raise that number, but there will always be someone who runs into the limit. Meanwhile, alt is associative, so it's easy to combine multiple alt parsers.
2025-04-01T04:10:26.614837
2017-12-03T00:13:14
278730327
{ "authors": [ "akhil-geekyants", "christianversloot" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14264", "repo": "GeekyAnts/NativeBase", "url": "https://github.com/GeekyAnts/NativeBase/issues/1415" }
gharchive/issue
Scrollable Tabs showing weird borders at some tabs Hi there, Unfortunately could not find a solution within your current issue tracker, so opened the one following next. react-native, react and native-base version "native-base": "^2.3.3", "react": "16.0.0", "react-native": "0.50.4" Expected behaviour No borders Actual behaviour Borders when clicking/swiping to some tabs: Steps to reproduce (code snippet or screenshot) Code pretty much equal to demo code, the behaviour also occurs when running the demo code: import React, { Component } from 'react'; import { ScrollableTab, Container, Content, Footer, Header, Title, Button, Left, Right, Body, Icon, Text, Tabs, Tab } from 'native-base'; export default class HeaderExample extends Component<{}> { render() { return ( <Container> <Header hasTabs> <Left> <Button transparent> <Icon name='menu' /> </Button> </Left> <Body> <Title>Actueel onweer</Title> </Body> <Right /> </Header> <Tabs renderTabBar={()=> <ScrollableTab />}> <Tab heading="Onweerradar"> <Text>1</Text> </Tab> <Tab heading="Liveblog"> <Text>2</Text> </Tab> <Tab heading="Waarschuwingen"> <Text>3</Text> </Tab> <Tab heading="Informatie"> <Text>4</Text> </Tab> </Tabs> </Container> ); } } Screenshot of emulator/device Screenshot attached above. Is the bug present in both ios and android or in any one of them? Android tested, iOS not tested. Any other additional info which would help us debug the issue quicker. Negative. Any clues? @akhil-geekyants @christianversloot try setting backgroundColor in tabsContainerStyle of <ScrollableTab/> to the required color for each platform. Like <ScrollableTab tabsContainerStyle={{ backgroundColor: platform == 'ios' ? '#F8F8F8' : '#3F51B5' }} This fixes the problem, thanks 👍 😄
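A minimal TypeScript/JSX sketch of the suggested fix, assuming the `platform` check in the snippet above maps to React Native's Platform.OS (the colors are the ones quoted in the thread):

```tsx
import React from "react";
import { Platform } from "react-native";
import { Tabs, Tab, ScrollableTab, Text } from "native-base";

// Setting tabsContainerStyle explicitly removes the stray borders when swiping between tabs.
const tabBackground = Platform.OS === "ios" ? "#F8F8F8" : "#3F51B5";

export const TabsWithoutBorders = () => (
  <Tabs renderTabBar={() => <ScrollableTab tabsContainerStyle={{ backgroundColor: tabBackground }} />}>
    <Tab heading="Onweerradar"><Text>1</Text></Tab>
    <Tab heading="Liveblog"><Text>2</Text></Tab>
  </Tabs>
);
```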
2025-04-01T04:10:26.627151
2023-09-01T07:47:50
1876926270
{ "authors": [ "gemboxsoftware-dev-team", "sunny1028" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14265", "repo": "GemBoxLtd/GemBox.Document.Examples", "url": "https://github.com/GemBoxLtd/GemBox.Document.Examples/issues/1" }
gharchive/issue
GemBox.Document not working in Docker I apologize for my poor English. I have deployed my .NET Core 7 program in Docker using the mcr.microsoft.com/dotnet/aspnet:7.0 image. The following line of code is causing an error: DocumentModel.Load(wordFileName).Save(outputPdfTemp); The error message is as follows: Unable to load shared library 'libHarfBuzzSharp' or one of its dependencies. In order to help diagnose loading problems, consider using a tool like strace. If you're using glibc, consider setting the LD_DEBUG environment variable: /app/runtimes/linux-x64/native/libHarfBuzzSharp.so: cannot open shared object file: No such file or directory /usr/share/dotnet/shared/Microsoft.NETCore.App/7.0.10/libHarfBuzzSharp.so: cannot open shared object file: No such file or directory /app/libHarfBuzzSharp.so: cannot open shared object file: No such file or directory /app/runtimes/linux-x64/native/liblibHarfBuzzSharp.so: cannot open shared object file: No such file or directory /usr/share/dotnet/shared/Microsoft.NETCore.App/7.0.10/liblibHarfBuzzSharp.so: cannot open shared object file: No such file or directory /app/liblibHarfBuzzSharp.so: cannot open shared object file: No such file or directory /app/runtimes/linux-x64/native/libHarfBuzzSharp: cannot open shared object file: No such file or directory /usr/share/dotnet/shared/Microsoft.NETCore.App/7.0.10/libHarfBuzzSharp: cannot open shared object file: No such file or directory /app/libHarfBuzzSharp: cannot open shared object file: No such file or directory /app/runtimes/linux-x64/native/liblibHarfBuzzSharp: cannot open shared object file: No such file or directory /usr/share/dotnet/shared/Microsoft.NETCore.App/7.0.10/liblibHarfBuzzSharp: cannot open shared object file: No such file or directory /app/liblibHarfBuzzSharp: cannot open shared object file: No such file or directory Hi, Please check the project file of our Docker example: GemBox.Document.Examples/C#/Platforms/Docker/DocumentDocker.csproj I believe the issue you're experiencing occurs because the following package reference is missing in your project: <PackageReference Include="HarfBuzzSharp.NativeAssets.Linux" Version="*" /> Also, please read our Docker example for more information about using GemBox.Document on Docker. I hope this helps. Regards, Mario Nice! It is working fine now. Thank you!
2025-04-01T04:10:26.634914
2017-10-15T05:01:00
265546229
{ "authors": [ "coveralls", "fentie", "mikebronner" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14266", "repo": "GeneaLabs/laravel-model-caching", "url": "https://github.com/GeneaLabs/laravel-model-caching/pull/24" }
gharchive/pull-request
Add column-based where test Fixes #16 Coverage remained the same at 98.561% when pulling 84092bbd9480610a0b297a73bd524be53bfa09a1 on fentie:add-where-column-test into 2b3f63ac5fed1e5611db482feae59b2c5e793d34 on GeneaLabs:master. Thanks for submitting this PR! Great work. I will make a few tweaks, and add it shortly.
2025-04-01T04:10:26.636814
2022-12-31T09:06:14
1515070964
{ "authors": [ "YoelShoshan", "jaekor91" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14267", "repo": "Genentech/equifold", "url": "https://github.com/Genentech/equifold/issues/11" }
gharchive/issue
minibatch size when training Hi! "A mini-batch size of 8 was used with 8 A100 GPUs in Pytorch’s Distributed Data Parallel mode" does this mean that the total number of samples seen per optimizer step is: [mb on a single gpu] x [gpus # in distributed setting] 8 x 8 = 64 or 1 x 8 = 8 ? 1 x 8 = 8 was used
2025-04-01T04:10:26.669698
2021-03-16T19:47:35
833142209
{ "authors": [ "FGBxRamel", "rom1v" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14269", "repo": "Genymobile/scrcpy", "url": "https://github.com/Genymobile/scrcpy/issues/2201" }
gharchive/issue
Need a German translation? Do you need a German ReadMe? I could help with that. Translations are welcome :wink: https://github.com/Genymobile/scrcpy/issues/2150#issuecomment-786738338 Well thanks I googled but I'm not quite familiar with contributing to a Project which is not mine, so it's gonna take a while to set it up.
2025-04-01T04:10:26.673760
2021-07-02T05:28:58
935430027
{ "authors": [ "Arghaop71", "rom1v" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14270", "repo": "Genymobile/scrcpy", "url": "https://github.com/Genymobile/scrcpy/issues/2448" }
gharchive/issue
How can I record the scrcpy screen without OBS? https://github.com/Genymobile/scrcpy#recording (It's even better to record directly rather than via OBS, because the timestamps are captured on the device.) Where do I put those recording commands? https://github.com/Genymobile/scrcpy/blob/master/FAQ.md#command-line-on-windows Will it record the sound if I use sndcpy at the same time? No. Ah. Is there any way to record scrcpy with sound without OBS?
2025-04-01T04:10:26.679569
2017-07-23T23:02:07
244942598
{ "authors": [ "badvision", "dwjohnston", "mprins" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14271", "repo": "GeoDienstenCentrum/sass-maven-plugin", "url": "https://github.com/GeoDienstenCentrum/sass-maven-plugin/issues/156" }
gharchive/issue
Error running in Jenkins This is working fine in my development environment. When I run from Jenkins - I get the following errors. However - it looks like possibly it is using the local jenkins ruby to run in the jenkins environment. <plugin> <groupId>nl.geodienstencentrum.maven</groupId> <artifactId>sass-maven-plugin</artifactId> <version>2.25</version> <executions> <execution> <id>compile</id> <goals> <goal>update-stylesheets</goal> </goals> <phase>compile</phase> <configuration> <sassSourceDirectory>${basedir}/src/versioned/sass</sassSourceDirectory> <destination>${project.build.directory}/compiled-sass/css</destination> </configuration> </execution> </executions> </plugin> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0 <===[JENKINS REMOTING CAPACITY]===>channel started Executing Maven: -B -f /var/lib/jenkins/jobs/Build Common Assets/workspace/pom.xml clean install -X -U Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-11T05:41:47+13:00) Maven home: /var/lib/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.3.9 Java version: 1.8.0_72, vendor: Oracle Corporation Java home: /usr/lib/jvm/java-8-oracle/jre Default locale: en_NZ, platform encoding: UTF-8 OS name: "linux", version: "3.13.0-125-generic", arch: "amd64", family: "unix" [SNIP] [INFO] --- sass-maven-plugin:2.25:update-stylesheets (compile) @ common-assets --- [INFO] Compiling Sass templates [INFO] No resource element was specified, using short configuration. [DEBUG] Setting source directory: /var/lib/jenkins/jobs/Build Common Assets/workspace/src/versioned/sass [DEBUG] Setting includes: [**/*.scss] [INFO] Queueing Sass template for compile: /var/lib/jenkins/jobs/Build Common Assets/workspace/src/versioned/sass => /var/lib/jenkins/jobs/Build Common Assets/workspace/target/compiled-sass/css [DEBUG] Execute Sass Ruby script: require 'rubygems' env = { 'GEM_PATH' => [ '/var/lib/jenkins/jobs/Build Common Assets/workspace/target/rubygems', 'file', '/var/lib/jenkins/plugins/ruby-runtime/WEB-INF/lib/stapler-jruby-1.209.jar!/gem' ].uniq.join(File::PATH_SEPARATOR) } Gem.paths = env require 'sass/plugin' require 'java' Sass::Plugin.options.merge!( :css_location => '/var/lib/jenkins/jobs/Build Common Assets/workspace/target/compiled-sass/css', :cache => true, :always_update => true, :template_location => '/var/lib/jenkins/jobs/Build Common Assets/workspace/src/versioned/sass', :cache_location => '/var/lib/jenkins/jobs/Build Common Assets/workspace/target/sass_cache', :unix_newlines => true, :style => :expanded ) Sass::Plugin.on_compilation_error {|error, template, css| $compiler_callback.compilationError(error.message, template, css) } Sass::Plugin.on_updated_stylesheet {|template, css| $compiler_callback.updatedStylesheeet(template, css) } Sass::Plugin.on_template_modified {|template| $compiler_callback.templateModified(template) } Sass::Plugin.on_template_created {|template| $compiler_callback.templateCreated(template) } Sass::Plugin.on_template_deleted {|template| $compiler_callback.templateDeleted(template) } require 'pp' pp Sass::Plugin.options Sass::Plugin.update_stylesheets Sass is in the process of being separated from Haml, and will no longer be bundled at all in Haml 3.2.0. Please install the 'sass' gem if you want to use Sass. NoMethodError: undefined method `on_updated_stylesheet' for Sass::Plugin:Module Did you mean? 
update_stylesheets method_missing at org/jruby/RubyBasicObject.java:1657 method_missing at /var/lib/jenkins/plugins/ruby-runtime/WEB-INF/lib/stapler-jruby-1.209.jar!/gem/gems/haml-3.1.1/vendor/sass/lib/sass/plugin.rb:113 <main> at <script>:20 nb. If SSH into the jenkins box directly, and run mvn clean install -X directly there - it doesn't have a problem, and that ruby line looks like: require 'rubygems' env = { 'GEM_PATH' => [ '/var/lib/jenkins/jobs/Build Common Assets/workspace/target/rubygems' ].uniq.join(File::PATH_SEPARATOR) } This suggests that Jenkins itself if somehow modifying how the plugin is being run. Ok, I think I've found the issue. In AbstractSassMojo.java you have ` sassScript.append("require 'rubygems'\n"); if (this.gemPaths.length > 0) { sassScript.append("env = { 'GEM_PATH' => [\n"); for (final String gemPath : this.gemPaths) { sassScript.append(" '").append(gemPath).append("',\n"); } final String gemPath = System.getenv("GEM_PATH"); if (gemPath != null) { for (final String p : gemPath.split(File.pathSeparator)) { sassScript.append(" '").append(p).append("',\n"); } } /* remove trailing comma+\n */ sassScript.setLength(sassScript.length() - 2); // TODO // quick fix for the deprecation message coming from Gem.paths, this should be cleaned up; // there's a round trip of splitting into array in java and then unsplitting the array in ruby... // see #118 sassScript.append("\n].uniq.join(File::PATH_SEPARATOR) }\n"); sassScript.append("Gem.paths = env\n"); }` What this is going to do is add the GEM_PATH environment variable to the ruby script if it exists. This environment variable exists on my jenkins build server, but not in my development environment. For now - to solve this issue I'll see if I can run the the jenkins job unsetting the environment variable. But I think there's a question here about whether that it's required to get that environment variable, or the option to skip it should be allowed. To solve via Jenkins: In your Jenkins build configuration: Check prepare an environment for the run In properties content add GEM_PATH= You appear to have a jenkins plugin with stapler and/or ruby-runtime, this is messing things up for you. This sets up an incompatible environment. It might be incompatible but is it possible for your code to be slightly more defensive to prevent this? Even installing the Travis-CI plugin will cause this because it also has a dependency on that out-dated ruby-runtime plugin. You and I both know the real culprit is the ruby-runtime plugin, but is there a way for this to at least side-step the issue until the ruby plugin gets its act together? picking up the gempath is a feature of the plugin, it is useful on the desktop. You can use the workaround shown above.
2025-04-01T04:10:26.680536
2014-05-12T11:10:27
33298641
{ "authors": [ "alegrm" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14272", "repo": "GeoKnow/GeoKnowGeneratorUI", "url": "https://github.com/GeoKnow/GeoKnowGeneratorUI/issues/34" }
gharchive/issue
Remove configuration from config.js Since we now have services for authentication, we need the configuration parameters on the server side (FrameworkConfiguration.java). Thus we have duplicated configuration files. We can create a service that provides the settings data on the server side and use it in config.js. Won't be fixed for the generator.
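A minimal TypeScript sketch of the proposed approach, i.e. the client fetching its settings from a server-side endpoint instead of duplicating them in config.js (the /api/settings path and the settings fields are assumptions; the issue itself was closed as won't-fix):

```typescript
// Hypothetical shape of the settings served by the backend (FrameworkConfiguration.java).
interface FrameworkSettings {
  sparqlEndpoint: string;
  authServiceUrl: string;
}

// Fetch the settings once at startup and expose them where config.js was used before.
async function loadSettings(): Promise<FrameworkSettings> {
  const response = await fetch("/api/settings");
  if (!response.ok) {
    throw new Error(`Failed to load settings: ${response.status}`);
  }
  return (await response.json()) as FrameworkSettings;
}
```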
2025-04-01T04:10:26.761821
2021-01-21T21:35:02
791510525
{ "authors": [ "MortezaYaqubkhani", "bertt", "fnicollet", "jailln", "tebben" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14273", "repo": "Geodan/i3dm.export", "url": "https://github.com/Geodan/i3dm.export/issues/17" }
gharchive/issue
Plans for Cesium compatibility Hello, This is a follow-up on this ticket: https://github.com/Geodan/i3dm.export/issues/15 Mostly to explain where I'm at and start a conversation about it :) With a few modifications in the source code, I managed to create i3dm tiles compatible with a Cesium viewer. Here is a demonstration: https://sandcastle.cesium.com/#c=dVPBbtswDP0VIaesaCQMvWVpMDQrukOHFWvWXnyRLSbhJksZKTtrh/37aMtB06TzRTDf4yP5KLWWVAWMTb0EIovhjmKLDkhdqkUf1xWBTfAYybuBM373oQitZIL3uOWI7jQ5wG4vcP0fVi+ThVqE3XHaQx8bF6Pc3yKGJMlAxehc/SmCkq+0DLf2CegOq59AU5WogfMOswFrmzCGqVpZzznIFQT4Eh3s6S9Qwhq8qB/G1hArIb/ibWINV01Kr4WDbXHdl/sMfnuKK7VqvGdxEsIpSvCrAU7fIEixrr08RxH+7n3uGxd3sk26/x2ghB4Y0mvr8nHxaZnB8WBXQ36qitEmpS1PjXHQ6rJhmZp5IsOuya6w0lWsTel+Ty7cxNlkDV642lgqCXiSV2GGovoHxyDryOrGHOmT3ek1pk0jRYAq2R+E1Mvn/m7ujdTotSZs662cpracgPYF2AwTPIrMUty7Qu/LaMnxaQ9FMme5Ewdls77fxN1VbILDsH6Ivqnh4HoMnEX0kfAZ+iqncCdxA7LxRFhdE0V6m/O9mzuvrEhnZr+4g2XpLWGNCVtgbZ0bD80f0J5jrJfxBRidj2acnjzMczGlPmK9jZS6LY61NgnEMXmYbMpGbrMYy9zpddSZOUydOWwVuss3XpKqvGUWpLuf9+JEMZrPjPBPUn20nZNfWyBvnzra5v38Nge11jMjv29nphh9aelI+R8 The dataset is about 31K trees, the same that i used in the mapbox-gl-js viewer: Here are the modifications i did in the source code to make it work: Moved away from your "transform+box" (https://github.com/CesiumGS/3d-tiles/tree/master/specification#box) bounding volume to a "region" one (https://github.com/CesiumGS/3d-tiles/tree/master/specification#region). It makes the generated tileset.json a lot clearer and easier to debug. Not sure why you used transform+box in the first place in your code ? Compatibility with code you already had for 3D tiles? Used radians instead of degrees as stated in the format specifications (lost a good 2 hours on that one ...) For instance positions, use "Cartesian3" (https://cesium.com/docs/cesiumjs-ref-doc/Cartesian3.html) coordinates instead of 3857. Had to convert some of the Cesium JS code to C# to be able to convert lat/lng to these coordinates Used some hard-coded values for the "rotation" vector as I couldn't really figure out what the specification expects (and I want my trees to point mostly up, so that's fine :) ) Don't reproject geometries to 3857 and keep everything in 4326 (I don't know if that's what made a difference, but the process was 20 minutes before and now it's about 3 seconds) Changed the -s and -e values in my command line to much smaller values as these are now degrees. All these would be "breaking changes" for you as they wouldn't be compatible with the code you created to load i3dm in mapbox-gl-js (https://github.com/Geodan/mapbox-3dtiles), so I don't know if you really want this in your codebase and if you want to maintain it. It would also need some more work so make it really compatible with both systems (for instance, the tile size should be calculated in number of tiles, not in units). For usage in our software, I will probably have to rewrite all this C# code to something more "multi-platform" (python or java) so contributing to this repo won't be very beneficial for me either, but it might be for other people willing to use Cesium. As you are the only maintainer of this repo, i would like to hear your thoughts about this :) Thanks, Fabien Great work! Adding Cesium support is low on our priority list, since we only use MapBox GL JS as client. But it would be nice to get the tileset also working in Cesium. Required changes should be non-breaking and have low maintenance. 
Some more remarks: boundingVolume.region: good suggestion, we use transform box until now because there is no support (yet) for region in MapBox-3DTiles extension. I've created an issue: https://github.com/Geodan/mapbox-3dtiles/issues/43 rotations: those are hard to get right in Cesium multi platform: this tool works on Windows/Linux/Mac Great work! Adding Cesium support is low on our priority list, since we only use MapBox GL JS as client. But it would be nice to get the tileset also working in Cesium. Required changes should be non-breaking and have low maintenance. Some more remarks: boundingVolume.region: good suggestion, we use transform box until now because there is no support (yet) for region in MapBox-3DTiles extension. I've created an issue: https://github.com/Geodan/mapbox-3dtiles/issues/43 rotations: those are hard to get right in Cesium multi platform: this tool works on Windows/Linux/Mac Thanks for the feedback :) For now, i'll keep my fork locally, creating a "clean PR" with a new parameter and no hard-coded values would take me too much time. So i'll keep this issue opened, and if someone else gets interested in the future, i'll make an effort to clean and share the work Thanks for your quick help on this! Fabien Thanks for the feedback :) For now, i'll keep my fork locally, creating a "clean PR" with a new parameter and no hard-coded values would take me too much time. So i'll keep this issue opened, and if someone else gets interested in the future, i'll make an effort to clean and share the work Thanks for your quick help on this! Fabien Hi, I'm pretty new to Cesium and the concept of the 3D tileset. So some of my questions might look a little bit unrelated to the purpose of this topic, My apologies ... I am starting a new project and the objective is to map millions of trees with Cesium. I found your work one of the few related works to my project. I have few questions about the work you've done and a few general questions about Cesium: from my understanding, the individual characteristics of each tree (such as location or shape) are determined in the .i3dm files. Is that correct? Could you please share some of the .i3dm files used in this project and elaborate a little bit on how to produce them? What is your idea about how to create a .i3dm file for a geospatial database? ( do you have any specific method or preference?) In case I want to showcase different types of trees in my project, do I have to create a separate .i3dm file for each type? Is it possible for me to use Cesium ION for these kinds of projects and don't bother to make tileset by myself. I've tried to see the existing solutions and tutorials on how to use Cesium ION for these kinds of visualization (instanced 3D models), but did not find a suitable reference, can you introduce a resource to me? Hi, Some answers: Samples: in this repository there are sample i3dm MapBox viewers and i3dm tiles. The i3dm's are created using this tool (i3dm-export). The tool takes a glTF model and instance positions/scales/rotations (from PostGIS) as input basically. If you have different types you have to create multiple i3dm's, with a unique glTF model as input. These tiles can be combined in a composite (cmpt) tile (also from 3D Tiles spec). Cesium Ion is an alternative method, you have to check on their site. Thanks for the reply! Hi, Some answers: Samples: in this repository, there are sample i3dm MapBox viewers and i3dm tiles. The i3dm's are created using this tool (i3dm-export). 
The tool takes a glTF model and instance positions/scales/rotations (from PostGIS) as input basically. If you have different types you have to create multiple i3dm's, with a unique glTF model as input. These tiles can be combined in a composite (cmpt) tile (also from 3D Tiles spec). Cesium Ion is an alternative method, you have to check on their site. Hi, While I am looking for ways to use Cesium ION for my project (tree visualization), I've decided to try open-source solutions as well, and creating 3d tiles for Mapbox (same as what you have done) seems like a good start. So I started to use this repository. However, I faced some issues ( I suppose it is because of my limited knowledge on . Net) following your steps. After installing .NET 5.0 SDK , and cloning this repository to my machine when I run the command "dotnet tool install -g i3dm.export" I face the following error: error NU1100: Unable to resolve 'i3dm.export (>= 0.0.0)' for 'net5.0'. error NU1100: Unable to resolve 'i3dm.export (>= 0.0.0)' for 'net5.0/any'. The tool package could not be restored. Tool 'i3dm.export' failed to install. This failure may have been caused by: * You are attempting to install a preview release and did not use the --version option to specify the version. * A package by this name was found, but it was not a .NET tool. * The required NuGet feed cannot be accessed, perhaps because of an Internet connection problem. * You mistyped the name of the tool. For more reasons, including package naming enforcement, visit https://aka.ms/failure-installing-tool Could you please give me a hint on what's the problem here?! Thanks Hi, command 'dotnet tool install -g i3dm.export' should work, what do you get with 'dotnet --version' ? What with 'dotnet nuget list source'? Note: you don't have to clone the source code when installing this tool. Hi again, dotnet --version: 5.0.202 dotnet nuget list source: No source found run 'dotnet nuget add source https://api.nuget.org/v3/index.json -n nuget.org' and try again Tool 'i3dm.export' (version '1.8.0') was successfully installed. Many Thanks Hello, This is a follow-up on this ticket: #15 Mostly to explain where I'm at and start a conversation about it :) With a few modifications in the source code, I managed to create i3dm tiles compatible with a Cesium viewer. Here is a demonstration: https://sandcastle.cesium.com/#c=dVPBbtswDP0VIaesaCQMvWVpMDQrukOHFWvWXnyRLSbhJksZKTtrh/37aMtB06TzRTDf4yP5KLWWVAWMTb0EIovhjmKLDkhdqkUf1xWBTfAYybuBM373oQitZIL3uOWI7jQ5wG4vcP0fVi+ThVqE3XHaQx8bF6Pc3yKGJMlAxehc/SmCkq+0DLf2CegOq59AU5WogfMOswFrmzCGqVpZzznIFQT4Eh3s6S9Qwhq8qB/G1hArIb/ibWINV01Kr4WDbXHdl/sMfnuKK7VqvGdxEsIpSvCrAU7fIEixrr08RxH+7n3uGxd3sk26/x2ghB4Y0mvr8nHxaZnB8WBXQ36qitEmpS1PjXHQ6rJhmZp5IsOuya6w0lWsTel+Ty7cxNlkDV642lgqCXiSV2GGovoHxyDryOrGHOmT3ek1pk0jRYAq2R+E1Mvn/m7ujdTotSZs662cpracgPYF2AwTPIrMUty7Qu/LaMnxaQ9FMme5Ewdls77fxN1VbILDsH6Ivqnh4HoMnEX0kfAZ+iqncCdxA7LxRFhdE0V6m/O9mzuvrEhnZr+4g2XpLWGNCVtgbZ0bD80f0J5jrJfxBRidj2acnjzMczGlPmK9jZS6LY61NgnEMXmYbMpGbrMYy9zpddSZOUydOWwVuss3XpKqvGUWpLuf9+JEMZrPjPBPUn20nZNfWyBvnzra5v38Nge11jMjv29nphh9aelI+R8 The dataset is about 31K trees, the same that i used in the mapbox-gl-js viewer: Here are the modifications i did in the source code to make it work: Moved away from your "transform+box" (https://github.com/CesiumGS/3d-tiles/tree/master/specification#box) bounding volume to a "region" one (https://github.com/CesiumGS/3d-tiles/tree/master/specification#region). It makes the generated tileset.json a lot clearer and easier to debug. 
Not sure why you used transform+box in the first place in your code ? Compatibility with code you already had for 3D tiles? Used radians instead of degrees as stated in the format specifications (lost a good 2 hours on that one ...) For instance positions, use "Cartesian3" (https://cesium.com/docs/cesiumjs-ref-doc/Cartesian3.html) coordinates instead of 3857. Had to convert some of the Cesium JS code to C# to be able to convert lat/lng to these coordinates Used some hard-coded values for the "rotation" vector as I couldn't really figure out what the specification expects (and I want my trees to point mostly up, so that's fine :) ) Don't reproject geometries to 3857 and keep everything in 4326 (I don't know if that's what made a difference, but the process was 20 minutes before and now it's about 3 seconds) Changed the -s and -e values in my command line to much smaller values as these are now degrees. All these would be "breaking changes" for you as they wouldn't be compatible with the code you created to load i3dm in mapbox-gl-js (https://github.com/Geodan/mapbox-3dtiles), so I don't know if you really want this in your codebase and if you want to maintain it. It would also need some more work so make it really compatible with both systems (for instance, the tile size should be calculated in number of tiles, not in units). For usage in our software, I will probably have to rewrite all this C# code to something more "multi-platform" (python or java) so contributing to this repo won't be very beneficial for me either, but it might be for other people willing to use Cesium. As you are the only maintainer of this repo, i would like to hear your thoughts about this :) Thanks, Fabien Hi Fabien, I want to do the same thing as you and create 3Dtilesets to use in Cesium. I was able to produce some results with i3dm.export to use in Maobix-GL. I want to start to modify the original codes and follow your method. However, it is a bit unclear to me. To start, could you please share tileset.json files and .i3dm files so I can make a comparison between the two. regards Hello, Here is the tileset.json (you can see it in the Sandcastle example i shared): https://dev.business-geografic.com/bdx-3d-data/i3dm/arbres-cesium/tileset.json In the tileset.json file, you will find the path to hte i3dm files, such as: https://dev.business-geografic.com/bdx-3d-data/i3dm/arbres-cesium/tiles/0_0_6_2.i3dm Hope it helps, Fabien Hello, Here is the tileset.json (you can see it in the Sandcastle example i shared): https://dev.business-geografic.com/bdx-3d-data/i3dm/arbres-cesium/tileset.json In the tileset.json file, you will find the path to hte i3dm files, such as: https://dev.business-geografic.com/bdx-3d-data/i3dm/arbres-cesium/tiles/0_0_6_2.i3dm Hope it helps, Fabien Thanks for the answer. I don't know if it is me doing something wrong or what? It's been a while I am trying to run your Sandcastle example, but it seems that nothing is loaded on the page (I can not see any trees!) except the base map. I even tried to import your tileset and .i3dm files in Cesium Ion, but it shows only a white frame on the map. Could you please take a look and see what's the problem! Thanks, Morteza The dataset is only visible when you zoom in, on that location: As for importing i3dm files in Cesium ION, I don't think it's a supported format unfortunately: https://cesium.com/docs/tutorials/uploading/ Thanks for the reply. I have seen it. 
By the way, if you upload the tileset.json file with corresponding .i3dm files into Cesium ION, you won't face any problem and your 3d tileset will be hosted by Cesium. I did the same thing with your tileset files and it worked. Thanks again for your help! Ah, that's good to know, thank you!
Hey Fabien, I just have another question regarding Cesium compatibility. Have you made the above changes to the source code or manually to the results (.json files)? For example, when you are talking about changing the bounding volume from "box" to "region". Thanks Morteza Yes, I've made several changes to the source code so that the output could be loaded in Cesium. At the moment, the code is not shared or in a Pull Request, and there are still some bugs regarding the 3D models' orientation (rotations). It was the point of this github issue, whether @bertt wanted to support it "officially" or not (see his answer in this thread) Fabien I gave Cesium support a try, it can be found in this branch: https://github.com/Geodan/i3dm.export/tree/feature/cesium-support Run i3dm.export with the --cesium parameter to export a Cesium compatible tileset. * BoundingBox3D in epsg 3857 is still used for tile calculations but gets converted using PostGIS to 4979 for the JSON export so no messing around is needed for the -s and -e parameters. * For --cesium a region BoundingVolume is used instead of box * Model positions are transformed from 4326 to 4978 (ECEF) so that they are compatible with Cesium. * Added GetLocalEnuCesium to calculate the correct rotation for a model at a certain position on the Globe. CMPT tiles don't seem to work yet, Cesium throws the following error: Error: start offset of Float32Array should be a multiple of 4 Thanks for this contribution! :) I gave it a try with the following dataset: https://data.grandlyon.com/jeux-de-donnees/arbres-alignement-metropole-lyon/telechargements I ran into the following error: MessageText: GetProj4StringSPI: Cannot find SRID (4979) in spatial_ref_sys so I added 4979 to the spatial_ref_sys table (c.f. https://spatialreference.org/ref/epsg/4979/postgis/ but change 94979 to 4979). It resolved the error and the 3D Tiles gets created.
The tileset.json file is loaded in Cesium but tiles seems to be inside the globe and the .i3dm files never gets loaded. Do you have any idea why? In which SRID should be the input data ? 3857 or 4326 ? In addition, if I'm not mistaken, Cesium is in 4978, why do you need 4979 then? Cheers Try with https://epsg.io/4978 Input data epsg requirement hasn't changed so it should be in 4326. 4979 is needed for the bounding volume region, see https://github.com/CesiumGS/3d-tiles/tree/main/specification#region I didn't had to do anything to my PostGIS installation to get it working so i'm not sure why 4979 isn't there on your installation. I do remember having to change to last Geometric error in the output JSON from 0 to a higher number to load the tiles. I will have a go tomorrow on the dataset you posted. In my test there was no supertileset generated, the bounds for the supertileset weren't converted which resulted in a wrong location and the i3dm tiles not loading. I pushed an update. Loaded GeoJSON into PostGIS ogr2ogr -f "PostgreSQL" PG:"host=myhost dbname=mydb user=myuser password=mypassword" -nln i3dm.arbres abr_arbres_alignement.abrarbre.json Added needed columns for i3dm.export -- rename geometry column to expected geom ALTER TABLE i3dm.arbres RENAME COLUMN wkb_geometry TO geom; -- add tags column ALTER TABLE i3dm.arbres ADD tags varchar; -- add rotation column ALTER TABLE i3dm.arbres ADD rotation int; -- set random rotation UPDATE i3dm.arbres SET rotation = (random()*360)::int; -- add model column ALTER TABLE i3dm.arbres ADD model varchar(255); -- set url to external model UPDATE i3dm.arbres SET model = 'https://myhost/i3dm/models/tree_6.glb'; -- add scale column ALTER TABLE i3dm.arbres ADD scale numeric; -- set scale based on hauteurtotale_m, 7.8 is the model height in meters UPDATE i3dm.arbres SET scale = round(CAST(hauteurtotale_m/7.8 as numeric),2)::float WHERE hauteurtotale_m != 0; -- set random scale between 5 and 20 meters where the tree height is unknown UPDATE i3dm.arbres SET scale = round(CAST(floor(random() * (5 - 20) + 20)/7.8 as numeric),2)::float WHERE hauteurtotale_m = 0; Thanks! It works like charm. Do you consider opening a PR ? I did a quick test with the getting started data (https://github.com/Geodan/i3dm.export/blob/main/docs/getting_started.md) and adding extra --cesium parameter. Results: I did have to add 4979 to the spatial_ref_sys_table (like @jailln) INSERT into spatial_ref_sys (srid, auth_name, auth_srid, proj4text, srtext) values ( 4979, 'epsg', 4979, '+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs ', 'GEOGCS["WGS 84",DATUM["World Geodetic System 1984",SPHEROID["WGS 84",6378137.0,298.257223563,AUTHORITY["EPSG","7030"]],AUTHORITY["EPSG","6326"]],PRIMEM["Greenwich",0.0,AUTHORITY["EPSG","8901"]],UNIT["degree",0.017453292519943295],AXIS["Geodetic latitude",NORTH],AXIS["Geodetic longitude",EAST],AXIS["Ellipsoidal height",UP],AUTHORITY["EPSG","4979"]]'); In the browser I now see red boxes in Amsterdam, but only shortly at startup. When zooming in the boxes disappear. 
Got it fixed by not using the terrain in the Cesium client: From var viewer = new Cesium.Viewer('cesiumContainer', { terrainProvider : Cesium.createWorldTerrain() }); to: var viewer = new Cesium.Viewer('cesiumContainer', { // terrainProvider : Cesium.createWorldTerrain() }); Show batch table information is also working Maybe epsg 4979 can be changed to 4326 so no changes are needed to PostGIS, the only difference is that 4979 is 3D and 4326 2D but the heights for boundingVolume.region are in meters above or below WGS 84 ellipsoid anyway. It does look like that there is a slight offset with the rotation, I see it in your demo but also in my tests with cars, street lights and trees. I need to look into this. We also should look into cmpt tiles before creating a PR, I think the problem sits in the i3dm.tile library. Updated code with Cesium support is in branch feature/cesium-support (will be merged soon). Also the Getting started document is updated with Cesium https://github.com/Geodan/i3dm.export/blob/feature/cesium-support/docs/getting_started.md So closing this issue.
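For reference, a minimal TypeScript/JavaScript sketch of loading the generated tileset in CesiumJS without the world terrain, as described above (the tileset URL is a placeholder, and the Cesium3DTileset constructor shown here is the API of Cesium versions contemporary with this thread; newer releases use Cesium3DTileset.fromUrl):

```typescript
import * as Cesium from "cesium";

// Create the viewer without createWorldTerrain(), which hid the instances in the tests above.
const viewer = new Cesium.Viewer("cesiumContainer");

// URL of the tileset.json produced by `i3dm.export --cesium` (placeholder path).
const tileset = new Cesium.Cesium3DTileset({ url: "tiles/tileset.json" });
viewer.scene.primitives.add(tileset);

// Fly to the instanced models once the tileset is loaded.
viewer.zoomTo(tileset);
```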
2025-04-01T04:10:26.965299
2016-10-05T20:30:43
181257846
{ "authors": [ "Gericop", "yccheok" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14274", "repo": "Gericop/Android-Support-Preference-V7-Fix", "url": "https://github.com/Gericop/Android-Support-Preference-V7-Fix/issues/47" }
gharchive/issue
Some question on build.gradle Hi Gergely Kőrössy, sorry for that! Since I can't find any better channel to contact you about a non-project thing, I decided to open this issue in order to be able to ping you. Recently, I wished to fork some Android library projects from GitHub, make some modifications, and publish them to maven (or jcenter) for my own use, so that I can use them by having the following in my build.gradle: compile 'com.yccheok:my-forked-library-v7:24' After reading https://inthecheesefactory.com/blog/how-to-upload-library-to-jcenter-maven-central-as-dependency/en I expected to see some similar info in your build.gradle: apply plugin: 'com.android.library' ext { bintrayRepo = 'maven' bintrayName = 'fb-like' publishedGroupId = 'com.inthecheesefactory.thecheeselibrary' libraryName = 'FBLike' However, I found none. It just looks like a normal build.gradle for a non-library project. May I know the secret behind publishing your Android-Support-Preference-V7-Fix library? Thank you very much! Cheok @yccheok It's in the preference-v7 module: https://github.com/Gericop/Android-Support-Preference-V7-Fix/blob/master/preference-v7/build.gradle @Gericop Wow. Thanks for the speedy response. My bad! I should check more carefully. Thank you very much.
2025-04-01T04:10:26.978655
2020-06-10T09:21:00
636084932
{ "authors": [ "Svito-zar", "whisperzh" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14275", "repo": "GestureGeneration/Speech_driven_gesture_generation_with_autoencoder", "url": "https://github.com/GestureGeneration/Speech_driven_gesture_generation_with_autoencoder/issues/5" }
gharchive/issue
How to specify using the dedicated graphics card? I successfully got the samples trained, but it seems that I was using my integrated graphics card. Could you help me? I assume that your problem is that the training was not utilizing your GPU. The following aspects are important to ensure that you use your GPU: Install the Tensorflow build which supports GPU: pip install tensorflow-gpu==1.14.0 Specify the GPU to be used during the training: CUDA_VISIBLE_DEVICES=1 python train.py MODEL_NAME EPOCHS DATA_DIR N_INPUT ENCODE DIM Hope that helps. If it does not - please provide more details about your issue. I am closing this issue due to inactivity.
2025-04-01T04:10:26.997408
2016-11-20T17:23:47
190570776
{ "authors": [ "mathias" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14276", "repo": "GetStream/Winds", "url": "https://github.com/GetStream/Winds/pull/48" }
gharchive/pull-request
WIP: Add Apple touch icons for bookmarks on mobile Problem: When this app is bookmarked through "Add to Home" on Apple devices, a screenshot is used rather than the icon. When it is bookmarked, the icon should appear here: Solution: Per https://developer.apple.com/library/content/documentation/AppleApplications/Reference/SafariWebContent/ConfiguringWebApplications/ConfiguringWebApplications.html , I added new <link> tags to layout. I generated the icons based off of the 1024x1024 icon.png. These files are in /.tmp/public/img/icons/ and I've added the link tags to the head in both layouts (but not committed the compiled changes to the layouts yet.) When I check on the iOS simulator, the icons aren't yet used and when I try to navigate to the assets locally (ie, http://localhost:1337/img/icons/touch-icon-iphone-60x60.png ) it isn't found. I'm not sure if the assets aren't checked in here, because I'd expect I'd be adding the files to assets/img instead of /.tmp/public/img/. If someone gives me some pointers on how to get the assets in the build, I can probably get this simple change over the line. Thanks! It doesn't look like this is working on the production site. Is there anything special that needs to be done to get this working in the webpack assets?
2025-04-01T04:10:27.077979
2015-10-20T02:35:50
112273715
{ "authors": [ "EionRobb", "GianlucaGuarini" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14277", "repo": "GianlucaGuarini/Tocca.js", "url": "https://github.com/GianlucaGuarini/Tocca.js/pull/29" }
gharchive/pull-request
Make longtap trigger after timeout To give a slightly more "native-y" feel to the longtap event, this PR makes the longtap fire after the timeout, rather than waiting until after the touchend Awesome thanks! This is worth a new release but I would like to fix other small issues first
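A small TypeScript sketch of listening for the event this PR changes; the 'longtap' event name comes from the library itself, while the element id is a placeholder:

```typescript
// Tocca.js dispatches 'longtap' as a DOM event; with this PR it fires when the
// press timeout elapses instead of waiting for the touch to end.
const target = document.getElementById("menu"); // placeholder element

target?.addEventListener("longtap", (event: Event) => {
  // e.g. open a context menu as soon as the long press is recognised
  console.log("long press detected on", event.target);
});
```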
2025-04-01T04:10:27.079422
2011-12-20T07:41:51
2610117
{ "authors": [ "GianlucaGuarini", "neriweaver" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14278", "repo": "GianlucaGuarini/jQuery.html5loader", "url": "https://github.com/GianlucaGuarini/jQuery.html5loader/issues/3" }
gharchive/issue
Preload doesn't work if there's no video Hi, when I tried to preload images and audio, the preloader doesn't work or it halts at a certain position. I don't know if this is a bug or if I'm doing something wrong. If I preload images alone the same thing happens; when I preload images and audio it doesn't work, but if I load audio alone, or video alone, or video and images, it works. Thanks for reporting the bug. Can you send me the link to your page? Does it happen in every browser? Hi, sorry, I found that I was pointing to the wrong URL for the files in the JSON. Sorry, MY BAD! =) Just a quick question: can I preload images from CSS? Background images? Thanks again! You cannot access CSS files from JavaScript, so you can't preload a background image that way. You can do something tricky: print your background image inside your HTML page using display: none.
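As an alternative to the display: none trick described above, a hedged TypeScript sketch of warming the browser cache for a background image from script (the image path is a placeholder, and this is not part of the plugin's own API):

```typescript
// Preload a CSS background image by creating an Image object; once it resolves,
// the file sits in the browser cache and the CSS rule can use it instantly.
function preloadBackground(url: string): Promise<void> {
  return new Promise((resolve, reject) => {
    const img = new Image();
    img.onload = () => resolve();
    img.onerror = () => reject(new Error(`Failed to preload ${url}`));
    img.src = url;
  });
}

// Usage (placeholder path):
preloadBackground("img/hero-background.jpg").then(() => {
  document.body.classList.add("background-ready");
});
```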
2025-04-01T04:10:27.082832
2022-04-21T20:16:28
1211508530
{ "authors": [ "GiantMolecularCloud", "saversux" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14279", "repo": "GiantMolecularCloud/my-resume", "url": "https://github.com/GiantMolecularCloud/my-resume/pull/9" }
gharchive/pull-request
Add tex pipeline to build resume This adds a GitHub action to build the PDF file. I renamed resume.pdf to avoid conflicts in the action. The output of the action can be downloaded from GitHub; it's saved as an artifact. I also added a .gitignore file. Very good idea to compile using an action. Thank you for providing the solution as well!
2025-04-01T04:10:27.084547
2021-05-17T00:45:43
892800737
{ "authors": [ "Alexiythymia" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14280", "repo": "Gilbert142/gomori", "url": "https://github.com/Gilbert142/gomori/issues/20" }
gharchive/issue
Zip file reading of mods doesn't appear to work I got the gomori mod itself to load and run fine using the provided instructions. When I attempted to run the decryptor mod by placing the zip file inside www/mods, but I kept getting an error (trying to read a property value from null). I fiddled with it a bit and eventually got the decryptor to work by extracting the contents of the zip folder into the mod folder. It also appears that running the decryptor has removed the gomori mod (the text isn't on the title screen anymore and the mod menu in options is gone), but I think that may be the decryptor's fault, I'll play with it a little and see if I can get it to come back. I figured out the second part. I was dorking around and turned the gomori mod off in-game, and the mod menu totally disappears from the menu. The only way to re-enable it if you do that is to go into OMORI/www/save/mods.json and change the value for the "gomori" key back to "true", then save the file and re-start the game.
2025-04-01T04:10:27.143531
2024-03-04T19:11:26
2167556539
{ "authors": [ "benzaied", "nbransby" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14281", "repo": "GitLiveApp/firebase-kotlin-sdk", "url": "https://github.com/GitLiveApp/firebase-kotlin-sdk/issues/476" }
gharchive/issue
Hello, thanks for your beautiful work. I started using it, and there IS a problem, only on iOS, when I use this code: reference.orderByKey().endAt(someKey).limitToLast(...). It gives this error: "you must use queryEndingAtValue: instead of queryEndingAtValue:childKey: when using queryOrderedByKey". Please supply the stacktrace for the error and some example code?
2025-04-01T04:10:27.243794
2023-08-01T19:15:18
1831875767
{ "authors": [ "f0ssel", "greyscaled", "universalmind303", "vrongmeal" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14332", "repo": "GlareDB/glaredb", "url": "https://github.com/GlareDB/glaredb/issues/1454" }
gharchive/issue
regression(cli): unable to exit cli app via ctrl+c Context prior to 4f1b4bada0bcb4d0d243b23de826962fd8a8c551 i was able to exit the cli app by pressing ctrl+c on an empty line. Now i can only exit via ctrl+d. Many applications allow exiting via ctrl+c as well as ctrl+d, not sure why we removed it. 4f1b4bada0bcb4d0d243b23de826962fd8a8c551 Looks like it'd be here? https://github.com/GlareDB/glaredb/commit/4f1b4bada0bcb4d0d243b23de826962fd8a8c551#diff-5368adac45277adc59dc377c7f408af46683933b6d569faa4e378486754ba288R204 I also removed this (and committed it in the commit Grey mentioned). I like to make space when working on the shell by hitting Ctrl-C. Working with the local command still feels like a shell and the same behaviour. I am unsure if other REPL applications allow exiting with Ctrl-C. It's usually with an exit command or Ctrl-D. For example: bash, zsh: exit or Ctrl-D python: exit() or Ctrl-D psql: exit or Ctrl-D I did find sqlite3 exiting with .exit / Ctrl-C / Ctrl-D but Ctrl-C didn't clear the scratch as well. Just had to hit it a couple of times in order to exit. But this is my opinion. Not against exiting on Ctrl-C when the scratch is empty. Just find myself changing it when working. TBH, psql doesn't exit on CTRL+C, it wants \q I think it's totally reasonable to have this behavior in the REPL but I think it would be best to confine this behavior to only that local session. For running the servers etc I would expect ctrl+c to work, and if we aren't listening for that signal there's a chance we don't exit correctly in deployments and wait to get force killed (idk if we have that problem, just observations I've seen before). We have a different Ctrl-C handler in server mode, so that should be ok: https://github.com/GlareDB/glaredb/blob/baa4be3d29398e13d837c13e1de3ca75ff416905/crates/glaredb/src/server.rs#L107-L142 TBH, psql doesn't exit on CTRL+C, it wants \q or exit or CTRL+D while i know we fully support the pg-wire protocol, I don't think that should necessarily mean we should look to psql for inspiration. It's a much older tool. Many more modern repls support ctrl+c (duckdb, polars-cli, node, ...), and no real "standard" on when it is/isn't best practice. So with that in mind, i think it's really just a matter of preference if we want to support it as a means of exiting or not. I prefer it, but if there is a consensus to use explicit exit commands (\q, CTRL+D) then that's totally fine. We have a different Ctrl-C handler in server mode, so that should be ok: I just tested 0.3.0 when running ./glaredb server and Ctrl-C works as expected. 
Running cargo run --bin glaredb server --ignore-auth on main (as of right now), there was a point in time where Ctrl-C didn't gracefully exit, but now it seems to: $ cargo run --bin glaredb server --ignore-auth Compiling termcolor v1.1.3 Compiling mysql-common-derive v0.30.2 Compiling mysql_common v0.30.6 Compiling mysql_async v0.32.2 Compiling datasources v0.3.0 (/home/greyb/Repos/glaredb/crates/datasources) Compiling sqlbuiltins v0.3.0 (/home/greyb/Repos/glaredb/crates/sqlbuiltins) Compiling sqlexec v0.3.0 (/home/greyb/Repos/glaredb/crates/sqlexec) Compiling metastore v0.3.0 (/home/greyb/Repos/glaredb/crates/metastore) Compiling pgsrv v0.3.0 (/home/greyb/Repos/glaredb/crates/pgsrv) Compiling glaredb v0.3.0 (/home/greyb/Repos/glaredb/crates/glaredb) Finished dev [unoptimized + debuginfo] target(s) in 3m 53s Running `target/debug/glaredb server --ignore-auth` 2023-08-02T15:44:27.399620Z INFO main ThreadId(01) logutil: crates/logutil/src/lib.rs:109: log level set set_level=INFO 2023-08-02T15:44:27.399679Z INFO main ThreadId(01) glaredb: crates/glaredb/src/bin/main.rs:162: starting... version="0.3.0" 2023-08-02T15:44:27.400975Z INFO main ThreadId(01) glaredb::server: crates/glaredb/src/server.rs:41: ensuring temp dir env_tmp="/tmp" 2023-08-02T15:44:27.401032Z INFO main ThreadId(01) metastore::local: crates/metastore/src/local.rs:14: starting in-process metastore 2023-08-02T15:44:27.401262Z INFO server-thread-12 ThreadId(14) metastore::srv: crates/metastore/src/srv.rs:38: creating new Metastore service with process id process_id=ec6bd640-00e6-41c8-8973-03da77aa3bc4 2023-08-02T15:44:27.401523Z INFO main ThreadId(01) glaredb::server: crates/glaredb/src/server.rs:54: skipping telementry initialization 2023-08-02T15:44:27.401637Z INFO main ThreadId(01) glaredb::server: crates/glaredb/src/server.rs:101: GlareDB listening... ^C2023-08-02T15:46:42.017008Z INFO server-thread-0 ThreadId(02) glaredb::server: crates/glaredb/src/server.rs:109: shutdown triggered 2023-08-02T15:46:42.017113Z INFO server-thread-0 ThreadId(02) sqlexec::background_jobs: crates/sqlexec/src/background_jobs.rs:144: close signal received, waiting for all background jobs to complete 2023-08-02T15:46:42.017151Z INFO server-thread-0 ThreadId(02) sqlexec::background_jobs: crates/sqlexec/src/background_jobs.rs:164: all background jobs completed 2023-08-02T15:46:42.017285Z INFO main ThreadId(01) glaredb::server: crates/glaredb/src/server.rs:148: shutting down greyb ~/Repos/glaredb (main) $
2025-04-01T04:10:27.252545
2024-03-26T20:38:39
2209279242
{ "authors": [ "PiZZAD0X", "joewhite94" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14333", "repo": "Global-Conflicts-ArmA/Olsen-Framework-Arma-3", "url": "https://github.com/Global-Conflicts-ArmA/Olsen-Framework-Arma-3/pull/289" }
gharchive/pull-request
Feature/suppression stances Adds a suppression check to the stance state machine, and refactored the stance state machine in a way that makes some more sense based on my understanding of SMs in general. Used this in a mission recently and it seemed to result in more survivable static AI and longer fights with them. Uses the suppression value that is in the game already. Tends to work despite some cases where the AI ignores unit pos setting entirely, I decided not to bother working around them. New Check_Suppression state that is entered when AI suppression value exceeds a configurable threshold, moves them down a stance Configurable suppression resistance, applied toward the end of the onSESuppressionCheck function, so the Check_Suppression state can be briefly ignored. Should result in AI that still pop up to take pot shots regardless of suppression. Changes to existing behavior: Removed the state entered function from the unit checks state. As this state is entered regularly it was previously resulting in the AI bouncing between conflicting unit pos instructions when targetting or suppressed. Replaced the onSEResetStance function with the one from onSEUnitChecks. This way Static AI without a target or suppression will still behave in the same way as before, despite the above change. Up to you if you'd want this in dev or the commanderAI branch, or if there's not an appetite for this feel free to close. Good work, and yeah the AI in later A3 patches really hates the unit pos commands, there seems to be more low level stuff acting on them to keep them in stances. https://github.com/CBATeam/CBA_A3/blob/master/addons/statemachine/CBA_FSMEditor.cfg If you want to dabble with the statemachines more. https://github.com/CBATeam/CBA_A3/blob/master/addons/statemachine/CBA_FSMEditor.cfg If you want to dabble with the statemachines more. Good to know, thank you
2025-04-01T04:10:27.338238
2024-04-11T19:52:14
2238449377
{ "authors": [ "FrostyCoolSlug", "Noctunus", "misterpyrrhuloxia" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14334", "repo": "GoXLR-on-Linux/goxlr-utility", "url": "https://github.com/GoXLR-on-Linux/goxlr-utility/issues/162" }
gharchive/issue
GoXLR App's GUI won't open I just installed the latest available version of the utility. During installation, I set the UI Handler to Browser. I then started up the utility from the start menu (I'm on Windows 11). Then in the browser-based utility I changed the setting System > Utility Settings > UI Handler from Browser to App. Then I closed the utility from the system tray and re-launched the utility again from the start menu. But the App never launches. The utility is visible in the system tray, but the App's window never opened. I then tried to right-click on the system tray icon > Configure GoXLR and yet the App never opened or became visible. So I re-opened the browser-based utility and changed the UI Handler back to Browser. Does anyone have any clue as to why the XLR App itself won't open while I have the UI Handler enabled? Did the app work fine with the previous version of the utility? I've been able to reproduce the problem (although it seems to be wildly inconsistent), I've re-released the 1.1.0 installer with a fix for the problem. Could you redownload the installer from here, reinstall, then give it another try please? I guess that's the reason why the v1.1.0 vanished from the releases? Btw the winget release is still there and when using wingetui it regularly tries to install the version which is not there. Just wanted to mention that - maybe while the v1.1 is being revised it would be better to also remove the winget version to reduce confusion on this end. You're correct, due to this issue (which ended up being more widespread than originally thought), 1.1.0 has been pulled until it can be solved. We have a PR open with winget to remove the version from there, but we're waiting for acceptance: https://github.com/microsoft/winget-pkgs/pull/148856 Back to the original issue, 1.1.1 has been released which contains a fix for this problem. It should all work properly again now. Thanks! Closing as complete. @FrostyCoolSlug, thanks for looking into this. I just updated from version 1.1.0 to 1.1.1. During installation I set the UI Handler to Browser. After installation, I opened launched GoXLR-Utility, opened the utility in the browser, went to the settings, and changed the UI Handler from Browser to App. Then I completely closed GoXLR-Utility from the system tray, confirmed that both goxlr-utility-ui.exe and goxlr-daemon.exe had both stopped, and then launched the utility again from the start menu. the utility started in the system tray but the utility's App GUI never opened. I then right-clicked the system tray icon and clicked Configure GoXLR but still nothing happened. I then went back into the Browser GUI to make sure that the UI Handler setting was still set to App and it was. So by all appearances, the upgrade from version 1.1.0 to 1.1.1 made no difference at all for me. Ok, I'll reopen this.. do you have a regular install of Windows, or have you run a tool that strips out various features (often referred to as 'bloat')? The only thing I've done to de-bloat is forcefully remove Edge and Edge Webview (or something like that). I used CrystalIdea's Uninstall Tool to do so. That's unfortunate, the Utility's app uses Webview to render the UI. There are two reasons for this decision: Using a browser that's expected to be present on an OS removes the need to bundle one with the Utility, massively reducing file size.. 
Important security updates (as well as performance improvements) are handled with OS updates, saving the need to have to repeatedly issue utility releases to keep things up-to-date. Unfortunately there's really nothing I can do to fix this that would result in a satisfactory conclusion, as a single Dev the overheads and management of bundeling and managing a browser to display the interface are just too high. Given that Edge (or at least) WebView are 'expected' components of Windows, I can't really support edge cases where people are forcefully removing them. Sorry I couldn't give a better answer. I can't really support edge cases where people are forcefully removing them. No pun intended?🤣 I understand, though. Thank you for your help, sir.🫡 Do you think you'd consider adding a notice to the main Readme stating that Webview is a requirement if one desires to use the app on Windows? I manually reinstalled Webview2 and now the GoXLR Utility's App gui works perfectly. Thanks for your help! Do you think you'd consider adding a notice to the main Readme stating that Webview is a requirement if one desires to use the app on Windows? I honestly don't think that's necessary. Webview is a required, and default, component of Windows. I consider people removing it as people who are intentionally breaking their Windows installation, and it's ultimately not my responsibility to make sure they don't do that. People running tools which remove critical components from their windows installation should do so with the understanding that it may cause problems with certain applications, which in this case it does. The utility installer tries to install Webview2 as part of the process, so these tools are not only removing webview, but leaving it flagged as installed, preventing the installer's attempt to fix it from correctly running. This isn't my problem, and I don't intend on making it my problem, so closing this as complete.
2025-04-01T04:10:27.344598
2024-09-18T15:45:30
2534098039
{ "authors": [ "Gramps", "granitrocky" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14335", "repo": "GodotSteam/GodotSteam", "url": "https://github.com/GodotSteam/GodotSteam/issues/486" }
gharchive/issue
Using Double Precision with precompiled Binaries CTD Describe the bug When I launch the editor after installing the addon, it immediately crashes to desktop. To Reproduce Use a build with double precision enabled Install the addon Immediate crash. Version of Godot: Latest Master Version of GodotSteam: Version: 4.10 Seems related to https://github.com/godotengine/godot/issues/88358 I should also note that building the extension as a module directly into godot works, so maybe this isn't worth investigating. Hey there! Yeah, we don't compile with double precision currently so that makes sense. I wasn't aware they were making it so that should work. I need to go back over that thread. It is relatively easy to produce builds though; could be added in at some point.
2025-04-01T04:10:27.352645
2021-03-07T00:38:39
823789065
{ "authors": [ "Gogo1951", "bindi" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14336", "repo": "Gogo1951/GogoLoot", "url": "https://github.com/Gogo1951/GogoLoot/issues/53" }
gharchive/issue
Gogoloot announces autoneed on BoEs even when it's disabled I was in a ZG run where the loot is changed from master loot to group loot back and forth, and every time group loot is enabled, GogoLoot announces that I am autoneeding on BoEs even though I specifically have that setting disabled. You aren't on latest. Make sure you update your addons daily. https://www.getajour.com/ I'm using CurseBreaker and specifically made sure that i'm up to date - but that's from CurseForge though. It supports Github as well, but they are the same version as far as the numbering tells me? You aren't on the latest. 1.6 is the latest. https://www.curseforge.com/wow/addons/gogoloot https://github.com/Gogo1951/GogoLoot/releases/tag/v1.6 - 28 days old. Up-to-date │ GogoLoot by aerorocks99, gogo913 │ GogoLoot-v1.6.zip Mate, you aren't on the latest. Don't know what else to tell you. Use Ajour or WoWUp and tell me what it comes back with. In my opinion, anyone not using Ajour is doing it wrong. Ajour has backups and a bunch of great features to keep your addons safe. They do it right. No clue about the tool you used, but I wouldn't trust it given it tells you that you're on 1.6 but aren't actually. C:\Program Files (x86)\World of Warcraft_classic_\Interface\AddOns\GogoLoot\GogoLoot.toc Does it say "## Title: |cFFFFFFFFGogoLoot|r|cFF00FF00 v1.6|r"? Yes. I just ran force update command to update the addon and I verified what you asked in the previous post, it's still announcing I'm autoneeding on BoEs (option still disabled). I haven't got a clue how that tool works, but I suspect poorly. Can you use Ajour? Send me a screenshot of your addons folder? Do you have duplicate versions installed?
2025-04-01T04:10:27.362172
2023-01-19T11:10:17
1548950186
{ "authors": [ "AmrithVengalath", "Etaliya", "Parmar-Bansi", "Suresha-Manjunatha", "fboch25", "lucasmachadoalvino", "prashant487859", "sandeep-myfriday", "white1984j", "yqz0203" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14337", "repo": "GoldenOwlAsia/react-native-twitter-signin", "url": "https://github.com/GoldenOwlAsia/react-native-twitter-signin/issues/198" }
gharchive/issue
JavaScript is not available I am getting error on Android device. JavaScript is not available. facing the same issue do any one of you able to solve the issue, i am also facing same issue, any solution for this ? I was facing the same error when I pressed authenticate button in Twitter web view, but I found that was not providing a twetter login credential that why facing this issue. I'm also facing this issue it was working fine before. Hey is there a fix to this? I have tried so many different solutions. Works perfectly fine on iOS and does not work for android ... @prashant487859 I found that this error appears when you enter the wrong email or password. Still happen even though entering the right password 😭 @prashant487859 I found that this error appears when you enter the wrong email or password. I hate who did this Still happen even though entering the right password 😭 I tested and found only logining with mobile number works. I tested and found only logining with mobile number works. how..?? can you guide me same error here any solution ??
2025-04-01T04:10:27.394467
2017-02-01T15:49:55
204621078
{ "authors": [ "VIM-Arcange", "pavelit", "serejandmyself", "t3ran13", "tomarcafe" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14338", "repo": "GolosChain/tolstoy", "url": "https://github.com/GolosChain/tolstoy/issues/276" }
gharchive/issue
Front-end image cache prevents updates If I use the following URL in a post: <EMAIL_ADDRESS> the image is resized by the front-end but also cached. It means that if the target image changes, it is not updated in the post :( How long is the cache duration (if any)? Is it possible to reduce/define the cache timeout? We could prefix the URL with something like https://imgp.golos.io/[width]x[height]x[cache timeout in minutes]/ I need it too. None yet. Considering x[cache timeout in minutes] - there are far more things we can enhance there. I think we'd have to redefine the resize/crop behaviour of imgp. It's strange, I do not get it although I modified it :) At the same time, I think using proxy URL parametrization is not a good idea. Any chance that when the original image URL is opened it already has an expiration header? I got: Content-Type: image/png Last-Modified: Mon, 23 Jan 2017 22:08:04 GMT Accept-Ranges: bytes ETag: "a13a9b2bc575d21:0" Server: Microsoft-IIS/10.0 X-Powered-By: ASP.NET Date: Fri, 03 Feb 2017 16:56:41 GMT Content-Length: 31610 in response to curl -IXGET <EMAIL_ADDRESS> (not a curl guru, just googled it). So - good, we have an ETag here - but this turns into a manual check - like you said, by specifying the cache expiration in the URL --- which IS a mess considering how the proxy works. Or are you planning to just enter the full link with the cache-expiration setting by hand? I.e. not quite a 'command string hack' but rather a 'URL hack'? I believe it's better to teach imgp to respect the Expires header and the Cache-Control: max-age header. Not an expert again, but I googled this one - https://stackoverflow.com/questions/7549177/expires-vs-max-age-which-one-takes-priority-if-both-are-declared-in-a-http-resp. Won't move further without @VIM-Arcange's feedback :) Thanks for your reply. In my opinion, a simple check against the Last-Modified date will do the trick. If the date is newer, don't use the cached image and update it with the new one. I did not understand that. Does the proxy save the image to cache, and that image has Last-Modified... or does the proxy issue a HEAD request and check Last-Modified against the caching timestamp? How often? Per image? At best, this gives us a cache that invalidates 'eventually'. This can be done, of course, yet I think we could spend some more time on that. So, using HTTP headers for cache control seems hard to implement at the original image's storage? Agreed, the end user probably won't ever control that. I see that's a thing to solve - but I'm not sure what's best to do. So your opinion is to have the cached-image check interval specified by an imgp URL parameter, and then check these images on a schedule via HEAD+GET or GET requests? Do we have a default cache-outdate interval then? What do we do when an image disappears at the origin? Remove it from the cache? So your opinion is to have the cached-image check interval specified by an imgp URL parameter - Yes, if there is no default. If you implement a default check interval (let's say every 1 hour), then an additional parameter is not required. And then check these images on a schedule via HEAD+GET or GET requests? - Yes. Do we have a default cache-outdate interval then? - It would be even simpler. What do we do when an image disappears at the origin? Remove it from the cache? - I would not change the current behavior. Any news on this issue? More and more users are embedding their awards from Доска Почета, and those don't get refreshed. Unfortunately not. Currently we are not taking any issues until we finish with the new web client updates. @VIM-Arcange I'm sorry, forgot about that.. soon!
@serejandmyself - it's on the imgp side solely - so far refresh will happen only for specific image URLs, not available in WYSIWYG editor mode. Current state: implemented new URLs: same as /200x100/, you can use /200x100t45/, which is planned to mean "cache valid for 45 minutes" (minutes as the best time unit is a guess). So far I test locally, and I see the information about the cache timeout is retrieved when we hit the cache next time. Yet it's not used so far, only written to the console. Pushed changes to imgp.golos.io, URL structure as above (i.e. you can add 't9000' after '/100x500' to request a 150-hour cache lifetime). Now some modification is needed on the webclient side - currently posts/comments only store the original URL, and the webclient wraps them with the proxy, but it only uses the /0x0/ (keep same), /256x128/ and /640x480/ prefixes. Also, it won't help pasting a URL already proxied with imgp - it will be double-proxied, with the last hop still looking like /0x0/ or /256x128/ and not caching. However, you can check by curl or browser that the feature works as intended. Note that /100x500t0/ means the cache will work - i.e. store the image - but on the next request that cache will be removed (as >0 minutes have passed). @VIM-Arcange, this also means you can manually call https://imgp.golos.io/0x0t0/http://url.here.png to reset the cache for a given image. I.e. while the image will still load as /0x0/ at the webclient, its cached contents will be reset upon the 't' call. @tomarcafe Thanks for the feedback and quick fix. I tested it in my latest posts and it looks like it is working fine. Excellent work! I am reopening this issue because the current implementation does not work. Two days ago, I changed the image I use for myself on golosboard.com to this one: <EMAIL_ADDRESS> It is a whale "swimming" toward the right. If you check my last post (https://golos.io/golosboard/@arcange/doska-pocheta-obnovlenie-3), the footer includes the following image link: [![](https://imgp.golos.io/100x80t1440/http://golosboard.com/@arcange/level.png)](http://golosboard.com/board.html?user=arcange) golos.io still displays the old image (a whale "swimming" toward the left), even though the timeout has expired. To reproduce the problem: create a new post, copy <EMAIL_ADDRESS> in the body. If you see a left-swimming whale, it means golos.io still retrieves the image from cache. Things are working correctly now. Closing the issue. It looks like the proxy timeout is broken again. <EMAIL_ADDRESS> The above link in a post displays a broken image. When changing the timeout value, the image is displayed correctly, e.g. <EMAIL_ADDRESS> Maybe because the timeout in minutes is counted since the cache update? Browsing <EMAIL_ADDRESS> should at least return an image. It does not return anything. Just checked, it returns the whale swimming left. You are right! I tested it again, and indeed it works if I do a browser refresh (CTRL-F5). I have to do the same on all the posts I published previously. I did not have this problem two days ago. You can close the issue if you think it is a client-side-only problem. Let's just try to figure out how we can use it. My best guess is simply calling /0x0t0/ or so to reset the cache immediately. Also, the cache freshness check is done upon the next request, so passing t1440 does not guarantee the next cache reload will happen 1 day after; it only tells imgp to reset the cache if it is older than 1440. You need to add some parameter to the end of the URL address. Old image cached by the image proxy.
https://imgp.golos.io/0x0/http://distrowatch.com/images/cgfjoewdlbc/rosa.png New image from the old URL with an extra parameter: https://imgp.golos.io/0x0/http://distrowatch.com/images/cgfjoewdlbc/rosa.png?0
2025-04-01T04:10:27.403670
2024-11-15T21:25:28
2663179064
{ "authors": [ "GitHubUser53123", "HenhenIII", "JoeHammad1844", "a-person5660", "gluesniffler", "whateverusername0" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14339", "repo": "Goob-Station/Goob-Station", "url": "https://github.com/Goob-Station/Goob-Station/pull/863" }
gharchive/pull-request
Salv is about to be rushed by every cargo tech at round start. What does it change, when random cargo techs don't interact with the station all shift and salvage doesn't interact with the station? Just an unnecessary removal of content, like that one PR removing the paramedic because med could do his job. ⭐ 🎆 Removes salvage; doesn't increase cargo tech jobs to compensate; doesn't give cargo techs salvage access; barely elaborates.
2025-04-01T04:10:27.410440
2023-07-01T20:19:16
1784202216
{ "authors": [ "aaronjamt", "michael-r-elp", "rcelyte", "roydejong" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14340", "repo": "Goobwabber/MultiplayerCore", "url": "https://github.com/Goobwabber/MultiplayerCore/pull/45" }
gharchive/pull-request
1.31 Support This PR adds support for Beat Saber 1.31: adds DisableSsl patch to bypass SSL / encryption / certificate checks on Ignorance/ENet connections minor code change for 1.31 compatibility (environmentInfos) bumps MpCore version to 1.5 I've made some changes to this and the BeatTogether client PR to address feedback re: encryption and settings. Regarding the SSL setting: MultiplayerCore / BeatTogether: You can now pass a use_ssl bool in multiplayer status data (default: false). MultiplayerCore stores this but cannot enforce it, that is currently the responsibility of mods that select/override servers like BeatTogether. MultiplayerCore now has a MpStatusRepository utility that allows other mods like BeatTogether to practically access this information and receive update events. BeatTogether will now automatically update its configuration based on the multiplayer status data. If your master server reports "use_ssl":true then BeatTogether will remember and apply that setting. ServerBrowser, unrelated to these changes: Will always detect & report encryption mode per lobby, and apply it when connecting. I take issue with having defaults incompatible with Official servers. Protocol extensions like this should build on the source seamlessly, not requiring hard-coded edge cases like if(isOfficial) where things defer. I don't disagree from a purist perspective. But pragmatically speaking, this default makes the most sense to me for third-party servers as I expect they will generally not do DTLS. Extended server status data is not used for official servers at all, because all mods do indeed have special handling for official. I think "if official, disable modded extensions" type logic will always exist and is not unreasonable. Flipping the default would have no effect except that it might be "more correct" and would require explicit opt-out from modded servers. Modded servers already have an explicit opt-out. The only status responses not containing a use_ssl field would be servers that either expect vanilla behaviour or never supported ENet in the first place (i.e. outdated instances). I've further updated this PR to 1.34, when will this be merged? I don't want to submit my own PR and take @roydejong's credit for their code until this PR has been merged. @roydejong would it be fine if we merge this into our dev branch instead of main, then we can open a new PR for 1.34.0 Sounds good, merging into dev
2025-04-01T04:10:27.412280
2014-07-09T18:07:24
37492507
{ "authors": [ "Fleker", "beaufortfrancois" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14341", "repo": "GoogleChrome/chrome-app-samples", "url": "https://github.com/GoogleChrome/chrome-app-samples/issues/263" }
gharchive/issue
IOIO Example Doesn't Work As in the previous issue #262, I downloaded the IOIO Sample from the Web Store and there's an immediate bug which prevents the app from going further: Uncaught TypeError: Cannot read property 'getDevices' of undefined main.js:86 Google Chrome 37.0.2062.3 (Official Build 279868) dev-m OS Windows From http://crbug.com/392654 We don't have any plans at this time for extending the chrome.bluetooth JS API, or supporting LE Peripheral mode on Chromebooks
2025-04-01T04:10:27.426671
2018-07-04T09:37:09
338202315
{ "authors": [ "addyosmani", "guar47" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14342", "repo": "GoogleChrome/essential-image-optimization", "url": "https://github.com/GoogleChrome/essential-image-optimization/pull/92" }
gharchive/pull-request
Change image path to local Hey @addyosmani. Here are the changes moving the images from Cloudinary to a local path. I also changed the names of the images to reflect their relative size. @googlebot I signed it! This looks great. Awesome work! While scrolling through both in desktop mode and while emulating an iPhone in DevTools, I did notice one bug: it appears we might have a missing image or reference that needs a fix. Could you take a look please, @guar47? @addyosmani Thank you for the review! Yes, indeed it was a bug with the file extension. Fixed it.
2025-04-01T04:10:27.427664
2016-05-14T18:27:41
154868575
{ "authors": [ "ianvollick" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14343", "repo": "GoogleChrome/houdini-samples", "url": "https://github.com/GoogleChrome/houdini-samples/pull/6" }
gharchive/pull-request
Fix compositor driven scrolling for twitter ex This demo seems to expose a bug preventing compositor driven scrolling. Changing the z indices works around this issue. I signed it!
2025-04-01T04:10:27.460207
2018-02-20T00:28:41
298437245
{ "authors": [ "StephenxKim", "aslushnikov" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14344", "repo": "GoogleChrome/puppeteer", "url": "https://github.com/GoogleChrome/puppeteer/issues/2059" }
gharchive/issue
Page.goto is locking the html file. I am running Puppeteer against a locally saved HTML file, but while page.goto is operating it locks the HTML file and the next process has a problem accessing it. Please advise: is there a way to run page.goto as read-only? Environment: Puppeteer version 1.3, RHEL 7.4, no URL (n/a), latest Node.js. @StephenxKim I believe Chrome doesn't hold open file handles for file:// URLs. At least I can easily navigate the browser to a file:// URL and then access the file in my editor.
2025-04-01T04:10:27.463038
2019-06-08T11:12:03
453783109
{ "authors": [ "Kashio", "vsemozhetbyt" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14345", "repo": "GoogleChrome/puppeteer", "url": "https://github.com/GoogleChrome/puppeteer/issues/4549" }
gharchive/issue
Puppeteer getting element from elementHandle causing protocol error I'm trying to scrape a certain facebook page for its posts written by a certain user and starting with a certain word. const puppeteer = require('puppeteer'); async function findPosts(page) { const USERNAME = 'test123'; const posts = await page.$$('.userContentWrapper'); return posts.filter(async post => { try { let usernameElement = await post.$('.fwb'); let username = await page.evaluate(element => element.textContent, usernameElement); if (username === USERNAME) { let postElement = await post.$('[data-testid="post_message"] p'); let postContent = page.evaluate(element => element.textContent, postElement); return /\[test \d+\]/.test(postContent); } return false; } catch(e) { console.log(e); return false; } }); } (async () => { const browser = await puppeteer.launch({ headless: false }); const page = await browser.newPage(); await page.goto('https://www.facebook.com/groups/groupid/'); const pageTitle = await page.title(); console.log(pageTitle); const posts = await findPosts(page); console.log(posts); await browser.close(); })(); I'm getting Error: Protocol error (Runtime.callFunctionOn): Target closed. when I'm trying to get the usernameElement at this line: let usernameElement = await post.$('.fwb'); Not sure what's going wrong here, any suggestions? posts.filter() returns synchronously, without waiting for all the filter callbacks. See the explanation and possible solutions, for example, here: https://exploringjs.com/es2016-es2017/ch_async-functions.html#_async-functions-and-callbacks
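To make the intent concrete, here is one hedged way to rework the filtering so every async predicate is awaited before filtering; the helper name is invented for illustration and is not part of Puppeteer:

```js
// Run the async predicate for every item first, then filter on the resolved results.
async function filterAsync(items, predicate) {
  const results = await Promise.all(items.map(predicate));
  return items.filter((_, index) => results[index]);
}

// Sketch of usage inside findPosts(), reusing the names from the snippet above:
// const posts = await page.$$('.userContentWrapper');
// return filterAsync(posts, async (post) => {
//   const usernameElement = await post.$('.fwb');
//   const username = await page.evaluate(el => el.textContent, usernameElement);
//   return username === USERNAME;
// });
```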
2025-04-01T04:10:27.466999
2017-07-28T22:00:50
246469294
{ "authors": [ "AVGP", "haihoi2", "hekod777", "samuelli" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14346", "repo": "GoogleChrome/rendertron", "url": "https://github.com/GoogleChrome/rendertron/issues/37" }
gharchive/issue
Clearing cache Should not clear on every instance. Clearing also needs to go through every cache option. Hi, my team wants to use Rendertron to perform SSR for several of our sites. We'd like to see an endpoint that we can hit to clear the cache for all the pages of a site. If we can clear a specific page on a specific site, that will be even better. Thanks! Does the endpoint need to be secure at all? We'd like to have it secured. Anyone outside my company should not be able to use the endpoint. We are using AWS at the moment; I'd like to know your opinion on how to implement the security. Hi, has anyone here updated or followed this? The fundamental function is present, but there is currently no way to get to it. This would require a PR to set up a route to it. I would leave the authentication out of Rendertron, as that's something that is best dealt with at the level of whatever infrastructure is making the calls to Rendertron (i.e. the reverse proxy or whatever other server is in front of Rendertron). Right now, Rendertron 3.0.0 provides the /invalidate/<url> endpoint to remove individual URLs from a cache. I think having an endpoint to clear the entire cache would be useful. Are you interested in implementing this feature? This landed in 3.1.0
2025-04-01T04:10:27.473394
2017-03-14T22:53:58
214230779
{ "authors": [ "addyosmani", "gauntface", "jeffposnick" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14347", "repo": "GoogleChrome/sw-helpers", "url": "https://github.com/GoogleChrome/sw-helpers/issues/321" }
gharchive/issue
registerNavigationRoute() should default to looking in the precaching cache Library Affected: sw-lib https://github.com/GoogleChrome/sw-helpers/pull/307 added support for swLib.registerNavigationRoute(), and it uses caches.match(url) under the hood to return the response. It would make sense to change that to default to caches.match(url, {cacheName}), where cacheName defaults to this._revisionedCacheManager.getCacheName() but can be changed via an option if the developer wants something else. It's not crucial for the hackathon, but it can work around an issue when upgrading from sw-precache, when there might still be an older entry with the same URL in one of the caches that sw-precache had created. Adding to the hackday label. I ran into this last night when trying to use registerNavigationRoute() but it's great that you've already been thinking about modifying the behavior to look in the precaching cache. Yup, the default would be to look explicitly in this._revisionedCacheManager.getCacheName(), which is the cache used for precached assets. But we'd also want to allow for overriding that default if a developer needs to. Why is caches.matches bad? In practice, I ran into issues when updating a site that previously used sw-precache and moved to sw-helpers. The caches.match() kept returning the response from the cache that sw-precache had created, because it will just return the first thing it finds. Being specific by letting folks specify a cache name and defaulting to the most likely choice works around that, and it seems like a better approach in general. Understood. So what happens if I were to lazy cache paths in the runtime cache? That won't ever be used for navigation routes if we only look in the precache cache right? In that case, the developer would need to set the cacheName option to override the default. I don't know that what you describe a use case we particularly want to encourage, but it's possible. The scenario where this would be desirable would be jakes wikipedia example where you cache pages for offline use. But it's an advanced example. I might be missing some nuance of Jake's example, but I think in that case, a route that just did traditional runtime caching would be fine. Such a route wouldn't explicitly check for request.mode === 'navigate', but it could still match the URLs that are being navigated to. Or if you wanted to use NavigationRoute in that scenario, you could, and just pass in an appropriate cacheName.
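As a rough sketch of the proposed behaviour (using the option name from this thread; the URLs and the final sw-lib API shape are assumptions, not the shipped interface):

```js
// Default: the navigation route answers with the precached shell,
// looked up in the precache cache via caches.match(url, {cacheName}) under the hood.
swLib.registerNavigationRoute('/shell.html');

// Hypothetical override: explicitly point the lookup at a different, runtime cache.
swLib.registerNavigationRoute('/shell.html', { cacheName: 'runtime-pages' });
```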
2025-04-01T04:10:27.499336
2022-04-11T04:52:26
1199389683
{ "authors": [ "Pika-Pool", "SJShip" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14348", "repo": "GoogleChromeLabs/bubblewrap", "url": "https://github.com/GoogleChromeLabs/bubblewrap/issues/668" }
gharchive/issue
App opens like a link with url bar at the top, rather than like an android app Describe the bug After installing the generated apk, the app opens like a normal link in the Google app rather than an android app. It has a url bar at the top To Reproduce Steps to reproduce the behavior: use bubblewrap cli to generate apk using the following manifest: Note: The links are fake // twa-manifest.json { "packageId": "app.web.facpro.twa", "host": "facpro.web.app", "name": "FACPRO - Faculty Research Progress", "launcherName": "FACPRO", "display": "fullscreen", "themeColor": "#FFFFFF", "navigationColor": "#000000", "navigationColorDark": "#000000", "navigationDividerColor": "#000000", "navigationDividerColorDark": "#000000", "backgroundColor": "#FFFFFF", "enableNotifications": true, "startUrl": "/", "iconUrl": "https://facpro.web.app/logo512.png", "maskableIconUrl": "https://facpro.web.app/maskable_icon_x512.png", "splashScreenFadeOutDuration": 300, "signingKey": { "path": "/path/to/android.keystore", "alias": "android" }, "appVersionName": "1", "appVersionCode": 1, "shortcuts": [], "generatorApp": "bubblewrap-cli", "webManifestUrl": "https://facpro.web.app/manifest.json", "fallbackType": "customtabs", "features": {}, "alphaDependencies": { "enabled": false }, "enableSiteSettingsShortcut": true, "isChromeOSOnly": false, "orientation": "default", "fingerprints": [], "additionalTrustedOrigins": [], "retainedBundles": [], "appVersion": "1" } //manifest.json of website { "short_name": "FACPRO", "name": "FACPRO - Faculty Research Progress", "icons": [ { "src": "favicon.ico", "sizes": "64x64 32x32 24x24 16x16", "type": "image/x-icon" }, { "src": "logo192.png", "type": "image/png", "sizes": "192x192" }, { "src": "logo512.png", "type": "image/png", "sizes": "512x512" }, { "src": "maskable_icon_x512.png", "type": "image/png", "sizes": "512x512", "purpose": "any maskable" } ], "start_url": ".", "display": "standalone", "theme_color": "#ffffff", "background_color": "#ffffff" } Expected behavior The app should open like an normal android app, without the url bar Screenshots Screenshot of app with url bar (in android emulator) Screenshot of app with url bar (in physical device OnePlus 7) Desktop OS: Fedora 35 workstation Browser: Chrome Version: 100 Smartphone (please complete the following information): Device: OnePlus 7 OS: Android 11 Browser: chrome Version: Android Chrome 100.0.4896.79 Additional context Additionally, the PWA works great when installing from the browser using the install app option from the website. It opens like a usual android app, without the url bar Yes, the screenshot is on an emulator, but the same thing happens on my OnePlus 7 too. I'll add a screenshot for that too @Pika-Pool , A Trusted Web Activity needs the origins being opened to be validated using Digital Asset Links, in order to show the content in full-screen (without URL bar). Did you provide the /.well-known/assetlinks.json endpoint for your root domain and subdomain with correct assetlinks? @SJShip, I just checked it, and I had names the assetlinks.json file as assetslinks.json. I've wasted so much time on this because of a small spelling mistake. Unbelievable.🤦🤦 Thanks a lot for pointing it out.
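For anyone hitting the same wall, a quick hedged way to sanity-check the Digital Asset Links endpoint from Node (hypothetical script, not part of Bubblewrap; replace the domain with your own):

```js
// Fetch /.well-known/assetlinks.json and confirm it is actually served at that exact path.
const https = require('https');

https.get('https://example.com/.well-known/assetlinks.json', (res) => {
  console.log('status:', res.statusCode); // a 404 here usually means a misnamed or missing file
  let body = '';
  res.on('data', (chunk) => (body += chunk));
  res.on('end', () => console.log(body)); // should list your package name and signing fingerprint
});
```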
2025-04-01T04:10:27.501172
2021-07-23T06:51:05
951302005
{ "authors": [ "bwalderman" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14349", "repo": "GoogleChromeLabs/chromium-bidi", "url": "https://github.com/GoogleChromeLabs/chromium-bidi/pull/30" }
gharchive/pull-request
Make domain objects extend DomainImpl Rename DomainBase to DomainImpl to match the name of CdpClientImpl. Update the type of CdpClient so that all of its domain properties also extend DomainImpl. This lets callers access the EventEmitter methods like removeListener, which will be useful later for implementing things like unsubscribe. It looks like the PR is targeted not at main but at cdp_refactor_2. Was it intended to be so? Yes. I wanted to merge cdp_refactor_2 first, and then I'll rebase this PR onto main.
2025-04-01T04:10:27.511530
2017-09-29T13:24:02
261640924
{ "authors": [ "SzasznikaJanos", "dj-4war" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14350", "repo": "GoogleCloudPlatform/android-docs-samples", "url": "https://github.com/GoogleCloudPlatform/android-docs-samples/issues/47" }
gharchive/issue
Speech Service resets every 30 sec We have an application where we want to listen continuously for voice/speech, but as per the sample, the voice recorder has to reset every 30 seconds. Is there a limitation on VoiceRecorder that it has to reset after a certain time? We want to have a continuous voice recorder without resets. As a workaround we are resetting the service, but we are getting lag in recognition. mRequestObserver.onNext(StreamingRecognizeRequest.newBuilder() .setStreamingConfig(StreamingRecognitionConfig.newBuilder() .setConfig(RecognitionConfig.newBuilder() .setLanguageCode(getDefaultLanguageCode()) .setEncoding(RecognitionConfig.AudioEncoding.LINEAR16) .setSampleRateHertz(sampleRate) .build()) .setInterimResults(true) .setSingleUtterance(false) // set this to false. .build()) .build()); in the startRecognizing function. The key is setting setSingleUtterance to false.
2025-04-01T04:10:27.515453
2017-02-02T12:12:23
204856367
{ "authors": [ "andrewsg", "thomasschickinger" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14351", "repo": "GoogleCloudPlatform/appengine-sidecars-docker", "url": "https://github.com/GoogleCloudPlatform/appengine-sidecars-docker/pull/45" }
gharchive/pull-request
Reenable detect exception plugin and switch version to 0.0.5. v0.0.5 of the plugin has removed the upper bound on the fluentd version and is compatible with every 0.1x version of fluentd. LGTM
2025-04-01T04:10:27.518932
2018-09-27T06:00:36
364316476
{ "authors": [ "medb", "prashantgolash" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14352", "repo": "GoogleCloudPlatform/bigdata-interop", "url": "https://github.com/GoogleCloudPlatform/bigdata-interop/pull/126" }
gharchive/pull-request
Support to add credentials via Hadoop credential provider path Pull request to support reading credential secrets from the Hadoop credential provider. Currently, this is one of the ways S3 credentials are read (integrated into the Hadoop main trunk). For details on the Hadoop credential provider, please refer to: https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CredentialProviderAPI.html. The client will configure the credential in core-site like this: google.cloud.auth.hadoop.credential.provider.path <EMAIL_ADDRESS> I signed here. I signed it! @prashantgolash could you take a look at whether https://github.com/GoogleCloudPlatform/bigdata-interop/pull/129 addresses your use case as well? Closing as obsolete; feel free to re-open if you think that related functionality is still missing in the GCS connector.
2025-04-01T04:10:27.528177
2015-04-21T19:22:01
69929673
{ "authors": [ "coveralls", "tmatsuo" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14353", "repo": "GoogleCloudPlatform/cloud-pubsub-logging-python", "url": "https://github.com/GoogleCloudPlatform/cloud-pubsub-logging-python/pull/11" }
gharchive/pull-request
Improved the real test coverage; auto-create the topic when it doesn't exist. Mainly test improvements. @gainward I'd like you to review this CL. I improved the test coverage (adding tests and removing pragmas). Also, I realized that the thread-safety code in get_pubsub_client() is not necessary, because pubsub_handler just serializes the input, and the API calls are blocking. Coverage remained the same at 100.0% when pulling ffd5fb30f114fde1ff8ed139fdfcfe137d927c29 on tmatsuo:improve-tests into f15770a91a2038c411961624d19c841de52cf7ac on GoogleCloudPlatform:master. The tests run successfully locally. I encrypted the service account JSON key with the travis command, but in the Travis container, the environment variables for decryption aren't injected. @gainward I think you did a similar setup in the past. Did you encounter a similar issue? Thanks! @gainward never mind. The secure environment variables are not available for builds from pull requests from another repo, so I need to move it to a branch in the same repo in order for the Travis build to work. Sorry about that, but can we move the discussion to https://github.com/GoogleCloudPlatform/cloud-pubsub-logging-python/pull/12
2025-04-01T04:10:27.537746
2020-09-30T15:16:05
712039288
{ "authors": [ "EricEdens" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14354", "repo": "GoogleCloudPlatform/compute-image-tools", "url": "https://github.com/GoogleCloudPlatform/compute-image-tools/pull/1379" }
gharchive/pull-request
CLI testing: Support module and e2e tests This modifies the directory structure to support two types of tests that share common utility code. e2e tests are executed on pre-compiled binaries. They validate that modules are combined correctly, and that common scenarios work as expected. module tests exercise a single module, typically with live dependencies. They are typically faster than e2e tests, allowing for increased coverage at a lower cost. Prior to pushing this, I'll submit a PR to update [1], which will update the associated presubmits. https://github.com/GoogleCloudPlatform/oss-test-infra/blob/master/prow/prowjobs/GoogleCloudPlatform/gcp-guest/compute-image-tools.yaml#L116 Testing Built prowjobs_cloudbuild.yaml: https://pantheon.corp.google.com/cloud-build/builds/49e6f867-ba7a-4a0b-9758-c4332d1c11ff?project=edens-test Executed tests using the three test binaries (image import/export, OVF, and windows upgrade) Prow updates: https://github.com/GoogleCloudPlatform/oss-test-infra/pull/524 /retest Hmm. GitHub is keeping the old cli-tools-e2e-tests results. Closing this and opening a new PR.
2025-04-01T04:10:27.539305
2021-04-10T00:44:28
854938694
{ "authors": [ "mservidio" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14355", "repo": "GoogleCloudPlatform/datashare-toolkit", "url": "https://github.com/GoogleCloudPlatform/datashare-toolkit/issues/476" }
gharchive/issue
Error: Could not load the default credentials Cloud run service logs constantly throwing: Error 2021-04-09 20:30:14.732 EDTError: Could not load the default credentials. Browse to https://cloud.google.com/docs/authentication/getting-started for more information. at GoogleAuth.getApplicationDefaultAsync (/shared/node_modules/@google-cloud/bigquery/node_modules/google-auth-library/build/src/auth/googleauth.js:155:19) at processTicksAndRejections (internal/process/task_queues.js:85:5) at async GoogleAuth.getClient (/shared/node_modules/@google-cloud/bigquery/node_modules/google-auth-library/build/src/auth/googleauth.js:486:17) at async GoogleAuth.authorizeRequest (/shared/node_modules/@google-cloud/bigquery/node_modules/google-auth-library/build/src/auth/googleauth.js:527:24) This started occurring in two separate environments over the past month. Fixed in documentation: https://github.com/GoogleCloudPlatform/datashare-toolkit/commit/c2ddcb385fd36d3e3de37728862d5700a6d35773
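For reference, a hedged Node.js sketch of supplying credentials to the BigQuery client explicitly instead of relying on application default credentials; the project ID and key path are placeholders:

```js
// Point the client at an explicit service-account key rather than ADC / the metadata server.
const { BigQuery } = require('@google-cloud/bigquery');

const bigquery = new BigQuery({
  projectId: 'my-project-id',                       // placeholder
  keyFilename: '/path/to/service-account-key.json', // placeholder
});

// Alternatively, export GOOGLE_APPLICATION_CREDENTIALS before starting the service
// so the default credential lookup can succeed.
```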
2025-04-01T04:10:27.545772
2017-03-02T08:08:36
211317692
{ "authors": [ "SurferJeffAtGoogle", "ergunbilgehan" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14356", "repo": "GoogleCloudPlatform/dotnet-docs-samples", "url": "https://github.com/GoogleCloudPlatform/dotnet-docs-samples/issues/193" }
gharchive/issue
Google Speech To Text by Microphone I am working with Google Speech in C#. I want to use the microphone. My code is below, but it only works with audio files. Please help me. Thank you. using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; using Google.Cloud.Speech.V1Beta1; namespace ConsoleApplication2 { class Program { static void Main(string[] args) { var speech = SpeechClient.Create(); var response = speech.SyncRecognize(new RecognitionConfig() { LanguageCode = "tr-TR", Encoding = RecognitionConfig.Types.AudioEncoding.Linear16, SampleRate = 16000, }, RecognitionAudio.FromFile("audio.wav")); foreach (var result in response.Results) { foreach (var alternative in result.Alternatives) { Console.WriteLine(alternative.Transcript); } } Console.ReadLine(); }}} Have you tried to build and run this sample? https://github.com/GoogleCloudPlatform/dotnet-docs-samples/tree/master/speech/api/Recognize Thank you for the help. Now I will convert this code to a Windows Forms app. Thank you for your help! I will use this code. Thank you :)
2025-04-01T04:10:27.552124
2024-06-28T17:54:00
2380908199
{ "authors": [ "grayside", "muncus" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14357", "repo": "GoogleCloudPlatform/golang-samples", "url": "https://github.com/GoogleCloudPlatform/golang-samples/issues/4228" }
gharchive/issue
Prepare for container registry shutdown Golang testing is using Container Registry, which is deprecated and scheduled for shutdown in March 2025. golang-samples-tests hosts the images we use to run our testing pipeline, which will need to be moved to Artifact Registry (and the image names updated in the kokoro configs). Do you have a list of current image URIs?
2025-04-01T04:10:27.554901
2018-04-28T19:03:12
318660874
{ "authors": [ "jba", "pongad", "raja", "snktagarwal" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14358", "repo": "GoogleCloudPlatform/google-cloud-go", "url": "https://github.com/GoogleCloudPlatform/google-cloud-go/issues/985" }
gharchive/issue
Speech Client support for using enhanced models (phone_call) Just curious what the status of updating the client to support the enhanced recognition models. I noticed the speech protobuf package has been updated with v1p1beta that seems to support this in the option struct. https://github.com/google/go-genproto/blob/master/googleapis/cloud/speech/v1p1beta1/cloud_speech.pb.go Client Speech @pongad, should we start generating these? Java is already doing it so I suppose we should too. https://godoc.org/cloud.google.com/go: When do the client libraries get generated? Is there a cron or something doing this? I am not seeing v1p1beta1 generated yet. redirects are published: https://godoc.org/cloud.google.com/go/speech/apiv1p1beta1
2025-04-01T04:10:27.557402
2017-05-08T04:03:48
226931919
{ "authors": [ "anuraaga", "coveralls" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14359", "repo": "GoogleCloudPlatform/google-cloud-java", "url": "https://github.com/GoogleCloudPlatform/google-cloud-java/pull/2038" }
gharchive/pull-request
Remove project id requirement for creating a GCS client. Project id is only required for creating buckets, not operating on them, so it doesn't make sense to make it required for all cases. I currently have some code that only operates on an existing bucket, and literally set the project id to "devnull" to prevent users from having to provide a valid project id which is not necessary. Coverage increased (+0.0008%) to 80.883% when pulling 71d854a650646ac52416be230bb11ab747222894 on anuraaga:gcs_noproject into d70f69603963231ca2f31f66514d2cf4f689100c on GoogleCloudPlatform:master. @shinfan Shortened the PR title, though feel free to edit it further as you see fit. Don't have write access so will need someone to merge :)
2025-04-01T04:10:27.559850
2017-01-30T10:20:28
203971084
{ "authors": [ "danoscarmike", "ozomer", "stephenplusplus" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14360", "repo": "GoogleCloudPlatform/google-cloud-node", "url": "https://github.com/GoogleCloudPlatform/google-cloud-node/issues/1942" }
gharchive/issue
pubsub documentation - mention that no-listeners means no-auto-acks. It's worth mentioning here that when a subscription is created with autoAck: true, and all 'message' listeners are removed, the subscription instance will stop acking new messages. i.e. messages are acked iff they are passed to at least one listener, and there is no need to worry about idle (zombie) subscription instances. This is a good feature - I was worried that it might not be implemented until I checked the code. @ozomer thanks for pointing this out. Would you like to create a PR to update the docs? You may open the PR yourself. Thanks. We're about to have a pretty big overhaul to the Pub/Sub API. This feature will not be implemented as originally planned. Please stay tuned for the PR / release of the new API.
2025-04-01T04:10:27.561632
2018-04-12T00:13:59
313532379
{ "authors": [ "chingor13", "tmatsuo" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14361", "repo": "GoogleCloudPlatform/google-cloud-php", "url": "https://github.com/GoogleCloudPlatform/google-cloud-php/issues/1008" }
gharchive/issue
[Batch] shm_put_var(): not enough shared memory left when using multiple job runners We are using the default amount of shared memory for storing the job runner configs when requesting a shared memory chunk using shm_attach. A single GAE flex application with opencensus, debugger, logger, and a single pubsub topic runs out of space. We should request a larger chunk of shared memory for the job config. It sounds like a reasonable plan, but providing an injectable configuration would potentially need to bubble all the way up to the client constructor or even the ServiceBuilder config. The daemon also needs to know the custom values. Now I'm thinking to use envvar instead of config options on constructor.
2025-04-01T04:10:27.566096
2017-10-31T22:36:30
270140678
{ "authors": [ "dhermes" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14362", "repo": "GoogleCloudPlatform/google-cloud-python", "url": "https://github.com/GoogleCloudPlatform/google-cloud-python/issues/4300" }
gharchive/issue
PyPI description for google-cloud-trace is Thank you for reporting an issue to google-cloud-python! If you are reporting an issue or requesting a feature, please search the existing open and closed issues to see if there is already work being done. https://github.com/GoogleCloudPlatform/google-cloud-python/issues http://stackoverflow.com/questions/tagged/google-cloud-python If you can provide us with as much of the following information as possible it will help us identify the cause of your issue more quickly. Specify the API at the beginning of the title (for example, "BigQuery: ...") General, Core, and Other are also allowed as types OS type and version Python version and virtual environment information python --version google-cloud-python version pip show google-cloud, pip show google-<service> or pip freeze Stacktrace if available Steps to reproduce Code example Using GitHub flavored markdown can help make your request clearer. See: https://guides.github.com/features/mastering-markdown/ It seems I hit a GitHub bug? #4301 was also created (but complete)
2025-04-01T04:10:27.578372
2020-09-10T18:48:31
698357848
{ "authors": [ "KonradSchieban", "dinvlad" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14363", "repo": "GoogleCloudPlatform/inspec-gke-cis-benchmark", "url": "https://github.com/GoogleCloudPlatform/inspec-gke-cis-benchmark/issues/5" }
gharchive/issue
Log permission errors Hi Team, Would it be possible to log any permission errors when running the benchmark? Currently, when we run it on a project and it returns zero findings, it's hard to tell if this is because the project is fully compliant, or if the benchmark SA simply didn't have access to some resources. This request is similar to https://github.com/GoogleCloudPlatform/inspec-gcp-cis-benchmark/issues/54 Thanks! Thanks @dinvlad for raising this issue. This will have to be implemented in the repo https://github.com/GoogleCloudPlatform/inspec-gcp-helpers . Once implemented, it will fix the issue for all dependent profiles (gke-cis, gcp-cis, gcp-pci, etc.)
2025-04-01T04:10:27.581263
2015-07-31T10:00:24
98357622
{ "authors": [ "alex-mohr", "jszczepkowski", "piosz", "socaa" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14364", "repo": "GoogleCloudPlatform/kubernetes", "url": "https://github.com/GoogleCloudPlatform/kubernetes/pull/12076" }
gharchive/pull-request
Added http API skeleton server. Part of #11570 @k8s-bot ok to test @googlebot check again! In general very good. Please add different error messages, HTTP error codes for failure and comments for unimplemented parts. I think you should also add Dockerfile, as the consumer will run as a container. PTAL LGTM @k8s-bot test this [contrib/submit-queue: candidate for merging] Automatic merge from SubmitQueue
2025-04-01T04:10:27.585084
2015-04-04T01:45:23
66261761
{ "authors": [ "davidopp", "ddysher", "rjnagal" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14365", "repo": "GoogleCloudPlatform/kubernetes", "url": "https://github.com/GoogleCloudPlatform/kubernetes/pull/6436" }
gharchive/pull-request
fix nodecontroller race #5535 introduces event recorder/broadcaster #6155 starts recording events in node probe Not sure if this fixes the problem... let's see Looks like both are failing for different reasons. What do you mean "Looks like both are failing for different reasons" ? whoops, sorry for the confusion. By both, I mean "travis" and "shippable". Last time I checked, travis is failing due to #6045; and shippable is failing due to shippable's internal server error. Will send a PR to remove event generation from NodeController shortly. @davidopp SGTM. As a stop-bleeding solution, we can remove event generation. What are we going to do about this fix? Close it? We'll eventually generate events in nodecontroller, so I'd at least want to know what's causing the race. I merged PR #6443 yesterday which removes event generation from NodeController. Maybe we should just re-name this issue to "add events back to NodeController" so we don't lose the history? Sorry, I mean rename #6199, not rename this issue (this isn't an issue, this is your PR). @davidopp @ddysher do we still need this fix? Nope, this is fixed by #6443.
2025-04-01T04:10:27.666608
2018-08-20T22:23:55
352317937
{ "authors": [ "danisla", "enisoc", "luisdavim" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14366", "repo": "GoogleCloudPlatform/metacontroller", "url": "https://github.com/GoogleCloudPlatform/metacontroller/issues/76" }
gharchive/issue
Hot loop when using Recreate update strategy with Go hook As reported by @cohix, a lambda hook written in Go that generates a JSON response by serializing the Go structs from the Kubernetes API package can trigger a hot loop of child recreation because Metacontroller thinks the hook wants to set metadata.creationTimestamp to null. This shows up in the Metacontroller logs as something like: manage_children.go:143] reflect diff: a=observed, b=desired: object[metadata][creationTimestamp]: a: "2018-08-20T21:21:50Z" b: <nil> This happens because Metacontroller's apply semantics currently assume any field present in the hook response is a field the hook author explicitly wants to set. However, metadata.creationTimestamp is always serialized into the hook response regardless of whether the author wants it, because CreationTimestamp in the ObjectMetastruct is a non-pointer struct field, so the omitempty option has no effect: https://github.com/kubernetes/apimachinery/blob/dcde72c465a0edef321bedc44fe1c16990970efe/pkg/apis/meta/v1/types.go#L175 Incidentally, metav1.Time has a custom JSON marshaler that serializes the zero value of the struct to null instead of "0001-01-01T00:00:00Z" (as you'd expect given it's a non-pointer field), but this doesn't change anything as far as Metacontroller is concerned. The problem is that the field gets serialized at all; we would prefer that it gets omitted. At a high level, marshaling to JSON from a statically-typed language like Go makes it hard for hook authors to follow the guidance from Metacontroller that they should omit fields they don't care about. You would have to diligently ensure that all fields have proper omitempty semantics for uninitialized struct fields. In Go, that essentially requires using pointers for all struct fields, but if you import the structs from upstream Kubernetes, you don't have control over that. One option could be to change Metacontroller so it doesn't try to update the field if the desired value matches the last-applied value. I actually thought it already does that because the intention was to match kubectl apply semantics, but then I remembered that Metacontroller is more aggressive about pushing towards desired state on purpose -- controllers generally ought to be more persistent than kubectl. Most controller authors would probably expect that their controller should "reset" the field back to the desired value if someone else changes it. With that said, I don't think we should rule out this option. Another option is to somehow work around the problem by figuring out that, for some fields, null really means "I don't care" and not "please set this to null". However, that would require specific knowledge of the schema of objects being processed, which Metacontroller so far manages to avoid. Is there any workaround for this? I’m using the kubernetes go library in my golang hook and seeing this same issue. To unblock people who prefer to write hooks in Go, I've proposed #94 which enforces the invariant that Metacontroller should never try to update read-only, system-populated metadata fields. That should fix the immediate issue here with creationTimestamp, but it remains to be seen if we'll encounter other problems outside ObjectMeta where the Kubernetes Go API failed to use pointers for omitempty struct fields. As mentioned above, the fix in #94 wouldn't work if there are other fields outside ObjectMeta that don't get omitted when left on their Go zero values. 
For Pod, it seems like we are safe, but @danisla has found a new example in ReplicaSet (status.replicas is intentionally not omitempty), which shows that there will likely be other examples scattered throughout the Go API structs. Given that, I'm leaning towards the other alternative proposed in the first post of this issue, which is to make Metacontroller's apply semantics match kubectl apply more closely in this scenario: You apply something like replicas: 0. Something else (e.g. Horizontal Pod Autoscaler) edits the object to set replicas: 5. You apply again, but you still say replicas: 0. In the case of kubectl apply, the HPA "wins" and your second apply leaves replicas at 5. That's because when you apply the same value as last time for a given field, kubectl assumes you mean that you don't care to change that field right now, so it remains at whatever value the live object on the server has. By contrast, currently in that scenario Metacontroller assumes the controller is trying to maintain the state replicas: 0 even in the face of outside interference. I thought it would be surprising to a controller author if, for example, the user directly alters a field (e.g. with kubectl edit), and the controller doesn't reset it back to the "right" value. After all, the job of a controller is to continually push towards the desired state. If we instead make Metacontroller match the kubectl apply behavior, it should prevent the whole class of endless updates discussed in this issue, but it might cause surprises in the other direction where people expect updates and they don't happen. If any users want to weigh in on which behavior would be least surprising, that would help. I think most of my use cases would prefer to match the behavior of kubectl apply. Can we have both and pick the right behavior in the controller config? The example is a little weird Sorry for using a confusing example. I actually did mean spec.replicas in the HPA example, but I was using it as a stand-in for a hypothetical spec field that is not omitempty. Given that we found status.replicas is not omitempty, I'm worried there are also spec fields lurking in the API that are not omitempty. I tend to think kubectl apply semantics are more useful when a human is on the other side of the client. Thanks for weighing in. This seems to confirm my suspicion that there are people out there who would reasonably be surprised by Metacontroller giving up so easily on enforcing what you ask it to. @danisla wrote: Can we have both and let the author pick the appropriate behavior via the controller config? We could do that, but I'm worried that it will be difficult to know when you should use that setting, especially if the failure mode for using the wrong setting is endless updates. It would also force you to choose "give up semantics" instead of "fight semantics" just because you wrote your hook in Go. Even if we support both semantics, the language you use really ought to be orthogonal to your choice of apply semantics. Maybe we can do a grep survey of existing Kubernetes APIs and see if my fear that there are non-omitempty spec fields is founded. If it's only status fields, we can introduce special behavior for that section of the object. Of course, it's always possible that new API fields will be introduced that break the rule. Also, people can easily make that mistake while writing their own Go structs for custom APIs. 
So my temporary workaround is to copy the child type structs from the k8s go API to my controller and just omit their Status field so that when it's marshaled to JSON, the status field is omitted. This feels like a hack, but it emphasizes the importance of knowing how your data gets serialized with Go. Maybe we can communicate some of these golang and k8s API caveats in the metacontroller docs? Another thing that bit me and caused a hot loop was not having a stable serialization of my parent spec. My spec contained a field of type map[string]string, which has no order when being marshaled to JSON. As a result, the metacontroller.k8s.io/last-applied-configuration would change frequently because the order of the map items would get shuffled by the encoder. This was fixed by converting my parent CRD to a list of structs rather than a map. Not sure if there is any way around this, or if metacontroller had some way of creating a more stable encoding of the last-applied-configuration. Thanks @enisoc, I'll keep debugging now that I know the map order shouldn't affect it. Hi, I'm running into this hot loop as well. Could it be because the status of my resources has conditions that include a lastProbeTime? Something like this:
2025-04-01T04:10:27.680966
2024-12-17T00:07:24
2743650126
{ "authors": [ "helensilva14" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14367", "repo": "GoogleCloudPlatform/professional-services-data-validator", "url": "https://github.com/GoogleCloudPlatform/professional-services-data-validator/pull/1373" }
gharchive/pull-request
docs: Update docs/connections.md and samples/oracle/README.md Closes #1268 Closes #1132 /gcbrun /gcbrun
2025-04-01T04:10:27.692749
2022-09-09T17:48:10
1368145012
{ "authors": [ "fmichaelobrien", "obriensystems" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14368", "repo": "GoogleCloudPlatform/pubsec-declarative-toolkit", "url": "https://github.com/GoogleCloudPlatform/pubsec-declarative-toolkit/issues/104" }
gharchive/issue
Add logging role binding and region/org_id instructions during logging storage bucket update - pre CC creation See the addition at step 0 https://github.com/GoogleCloudPlatform/pubsec-declarative-toolkit/tree/main/solutions/landing-zone#usage cloud alpha logging settings update --organization=$ORG_ID --storage-location=$REGION on a clean org we need to set the logging admin role as well as derive the org id michael@cloudshell:~/wse_github/GoogleCloudPlatform (pubsec-declarative-tk-gz)$ export REGION=northamerica-northeast1 michael@cloudshell:~/wse_github/GoogleCloudPlatform (pubsec-declarative-tk-gz)$ gcloud alpha logging settings update --organization=$ORG_ID --storage-location=$REGION ERROR: (gcloud.alpha.logging.settings.update) User<EMAIL_ADDRESS>does not have permission to access organizations instance [925207728429] (or it may not exist): Permission 'logging.cmekSettings.update' denied on resource (or it may not exist). fix... michael@cloudshell:~/wse_github/GoogleCloudPlatform (pubsec-declarative-tk-gz)$ export PROJECT_ID=$(gcloud config list --format 'value(core.project)') michael@cloudshell:~/wse_github/GoogleCloudPlatform (pubsec-declarative-tk-gz)$ echo $PROJECT pubsec-declarative-tk-gz michael@cloudshell:~/wse_github/GoogleCloudPlatform (pubsec-declarative-tk-gz)$ export ORG_ID=$(gcloud projects get-ancestors $PROJECT_ID --format='get(id)' | tail -1) michael@cloudshell:~/wse_github/GoogleCloudPlatform (pubsec-declarative-tk-gz)$ echo $ORG_ID 925207728429 michael@cloudshell:~/wse_github/GoogleCloudPlatform (pubsec-declarative-tk-gz)$ export<EMAIL_ADDRESS>michael@cloudshell:~/wse_github/GoogleCloudPlatform (pubsec-declarative-tk-gz)$ gcloud organizations add-iam-policy-binding "${ORG_ID}" --member "user:${EMAIL}" --role roles/logging.admin Updated IAM policy for organization [925207728429]. michael@cloudshell:~/wse_github/GoogleCloudPlatform (pubsec-declarative-tk-gz)$ gcloud alpha logging settings update --organization=$ORG_ID --storage-location=$REGION name: organizations/925207728429 storageLocation: northamerica-northeast1 continue KCC CC creation michael@cloudshell:~/wse_github/GoogleCloudPlatform (pubsec-declarative-tk-gz)$ arete create landing-zone-controller --region=northamerica-northeast1 5:36PM INF Project name will be set to: landing-zone-controller-e4g7d ✔ My Billing Account - 015F84-8FD578-D96F04 ✔ gcp.zone -<PHONE_NUMBER>29 ✔ Folder Level ✔ pdt -<PHONE_NUMBER>9 5:36PM INF Create in progress for [https://cloudresourcemanager.googleapis.com/v1/projects/landing-zone-controller-e4g7d].Waiting for [operations/cp.8884632004916483157] to finish.....done.Enabling service [cloudapis.googleapis.com] on project [landing-zone-controller-e4g7d]...Operation "operations/acat.p2-791826419490-8b10ad41-7c22-4a31-88bb-30abad9f31d6" finished successfully.Updated property [core/project] to [landing-zone-controller-e4g7d]. 5:36PM INF Enabling required services... 5:37PM INF Operation "operations/acf.p2-791826419490-2789779d-61a9-46a0-b3db-8c6217fadace" finished successfully. 5:37PM INF Creating Network... 5:37PM INF Creating subnet.... 5:38PM INF Creating Config Controller Cluster... ller].Fetching cluster endpoint and auth data.kubeconfig entry generated for krmapihost-landing-zone-controller. 6:02PM INF Add SA to roles/owner role... 
6:02PM INF Config Controller setup complete add roles (step 1) michael@cloudshell:~/wse_github/GoogleCloudPlatform (landing-zone-controller-e4g7d)$ pwd /home/michael/wse_github/GoogleCloudPlatform michael@cloudshell:~/wse_github/GoogleCloudPlatform (landing-zone-controller-e4g7d)$ kpt pkg get https://github.com/GoogleCloudPlatform/pubsec-declarative-toolkit.git/solutions/landing-zone landing-zone Package "landing-zone": Fetching https://github.com/GoogleCloudPlatform/pubsec-declarative-toolkit@main From https://github.com/GoogleCloudPlatform/pubsec-declarative-toolkit * branch main -> FETCH_HEAD * [new branch] main -> origin/main Adding package "solutions/landing-zone". Fetched 1 package(s). edit settings.yaml billing-id: "015F84-8FD578-D96F04" org-id: "925207728429" ############# # Management Project # This is the project where the config controller instance is running # Values can be viewed in the Project Dashboard management-project-id: landing-zone-controller-e4g7d management-project-number: "791826419490" ############# # Project IDs # These are the IDs for the projects that will be created by the LZ script # All IDs should be universally unique # Must be 6 to 30 characters in length. # Can only contain lowercase letters, numbers, and hyphens. # Must start with a letter. # Cannot end with a hyphen. # Cannot be in use or previously used; this includes deleted projects. # Cannot contain restricted strings, such as google and ssl. net-host-prj-nonprod-id: net-host-prj-nonprod-gz1 net-host-prj-prod-id: net-host-prj-prod-gz1 net-perimeter-prj-common-id: net-perimeter-prj-common-gz1 audit-prj-id: audit-prj-id-gz1 guardrails-project-id: guardrails-project-gz1 ############# # Groups # Permissions will be assigned to the specified group email audit-viewer<EMAIL_ADDRESS> log-writer<EMAIL_ADDRESS> log-reader<EMAIL_ADDRESS> organization-viewer<EMAIL_ADDRESS> add policies exclusion https://github.com/GoogleCloudPlatform/pubsec-declarative-toolkit/issues/103 michael@cloudshell:~/wse_github/GoogleCloudPlatform (landing-zone-controller-e4g7d)$ cat landing-zone/.krmignore cicd-examples/ environments/common/policies michael@cloudshell:~/wse_github/GoogleCloudPlatform (landing-zone-controller-e4g7d)$ cd landing-zone/ michael@cloudshell:~/wse_github/GoogleCloudPlatform/landing-zone (landing-zone-controller-e4g7d)$ kpt fn render Package "landing-zone/environments/common/guardrails-policies": Package "landing-zone/environments/common": [RUNNING] "gcr.io/kpt-fn/set-namespace:v0.4.1" [PASS] "gcr.io/kpt-fn/set-namespace:v0.4.1" in 2.8s Results: [info]: namespace "common" updated to "config-control", 23 value(s) changed Package "landing-zone/environments/nonprod": [RUNNING] "gcr.io/kpt-fn/set-namespace:v0.4.1" [PASS] "gcr.io/kpt-fn/set-namespace:v0.4.1" in 400ms Results: [info]: namespace "nonprod" updated to "config-control", 7 value(s) changed Package "landing-zone/environments/prod": [RUNNING] "gcr.io/kpt-fn/enable-gcp-services:v0.1.0" [PASS] "gcr.io/kpt-fn/enable-gcp-services:v0.1.0" in 3.7s Results: [info] serviceusage.cnrm.cloud.google.com/v1beta1/Service/config-control/prod-nethost-service-compute: generated service [info] serviceusage.cnrm.cloud.google.com/v1beta1/Service/config-control/prod-nethost-service-logging: generated service [RUNNING] "gcr.io/kpt-fn/set-namespace:v0.4.1" [PASS] "gcr.io/kpt-fn/set-namespace:v0.4.1" in 400ms Results: [info]: namespace "prod" updated to "config-control", 4 value(s) changed Package "landing-zone": [RUNNING] "gcr.io/kpt-fn/apply-setters:v0.2" [PASS] 
"gcr.io/kpt-fn/apply-setters:v0.2" in 2.6s Results: [info] metadata.annotations.cnrm.cloud.google.com/organization-id: set field value to "925207728429" [info] metadata.annotations.cnrm.cloud.google.com/organization-id: set field value to "925207728429" [info] spec.projectID: set field value to "net-perimeter-prj-common-gz1" [info] spec.parentRef.external: set field value to "925207728429" ...(87 line(s) truncated, use '--truncate-output=false' to disable) [RUNNING] "gcr.io/kpt-fn/generate-folders:v0.1.1" [PASS] "gcr.io/kpt-fn/generate-folders:v0.1.1" in 5.9s [RUNNING] "gcr.io/kpt-fn/enable-gcp-services:v0.1.0" [PASS] "gcr.io/kpt-fn/enable-gcp-services:v0.1.0" in 2.5s Results: [info] serviceusage.cnrm.cloud.google.com/v1beta1/Service/config-control/nonprod-nethost-service-compute: generated service [info] serviceusage.cnrm.cloud.google.com/v1beta1/Service/config-control/nonprod-nethost-service-dns: generated service [info] serviceusage.cnrm.cloud.google.com/v1beta1/Service/config-control/nonprod-nethost-service-logging: generated service [info] serviceusage.cnrm.cloud.google.com/v1beta1/Service/config-control/prod-nethost-service-compute: recreated service ...(3 line(s) truncated, use '--truncate-output=false' to disable) [RUNNING] "gcr.io/kpt-fn/gatekeeper:v0.2.1" [PASS] "gcr.io/kpt-fn/gatekeeper:v0.2.1" in 4.4s [RUNNING] "gcr.io/kpt-fn/kubeval:v0.3.0" [PASS] "gcr.io/kpt-fn/kubeval:v0.3.0" in 27.5s Successfully executed 9 function(s) in 5 package(s). michael@cloudshell:~/wse_github/GoogleCloudPlatform/landing-zone (landing-zone-controller-e4g7d)$ kpt live init landing-zone --namespace config-control Error: invalid directory argument: landing-zone michael@cloudshell:~/wse_github/GoogleCloudPlatform/landing-zone (landing-zone-controller-e4g7d)$ cd .. michael@cloudshell:~/wse_github/GoogleCloudPlatform (landing-zone-controller-e4g7d)$ kpt live init landing-zone --namespace config-control initializing Kptfile inventory info (namespace: config-control)...success michael@cloudshell:~/wse_github/GoogleCloudPlatform (landing-zone-controller-e4g7d)$ kpt live apply landing-zone --reconcile-timeout=2m --output=table Error: 4 resource types could not be found in the cluster or as CRDs among the applied resources. Resource types: constraints.gatekeeper.sh/v1beta1, Kind=NamingPolicy constraints.gatekeeper.sh/v1beta1, Kind=DataLocation constraints.gatekeeper.sh/v1beta1, Kind=LimitEgressTraffic constraints.gatekeeper.sh/v1beta1, Kind=CloudMarketPlaceConfig michael@cloudshell:~/wse_github/GoogleCloudPlatform (landing-zone-controller-e4g7d)$ cat landing-zone/.krmignore cicd-examples/ environments/common/policies wrong exclusion environments/common/guardrails-policies one of them is not in the gr policies michael@cloudshell:~/wse_github/GoogleCloudPlatform (landing-zone-controller-e4g7d)$ kpt live apply landing-zone --reconcile-timeout=2m --output=table Error: 1 resource types could not be found in the cluster or as CRDs among the applied resources. Resource types: constraints.gatekeeper.sh/v1beta1, Kind=NamingPolicy add environments/common/general-policies/naming-rules working replying to each issue comment in sequence, this is one of my earlier ones before the BUG template The PR 105 under 104 is a fix for adding a missing logging admin role (and associated org derivation) before we set the org flag on the default location A new user coming into the deployment will not have this role set. 
We need to run the deployments on GCP accounts that are clean or do not have previous runs of the deployment (they will already have the roles) - you won't see this issue on your system unless it is new As well as a fix caught by our collaborator on the existing command (gcloud vs cloud) - we will need the same fix in the guardrails I have only regression tested the landing-zone solution - not the guardrails solution on the change that moved towards #104 https://github.com/GoogleCloudPlatform/pubsec-declarative-toolkit/pull/92/files this is what a new GCP account will see - an expected missing role See the reproduction section in the 104 issue michael@cloudshell:~/wse_github/GoogleCloudPlatform (pubsec-declarative-tk-gz)$ gcloud alpha logging settings update --organization=$ORG_ID --storage-location=$REGION ERROR: (gcloud.alpha.logging.settings.update) User<EMAIL_ADDRESS>does not have permission to access organizations instance [925207728429] (or it may not exist): Permission 'logging.cmekSettings.update' denied on resource (or it may not exist). after fix michael@cloudshell:~/wse_github/GoogleCloudPlatform (pubsec-declarative-tk-gz)$ gcloud alpha logging settings update --organization=$ORG_ID --storage-location=$REGION name: organizations/925207728429 storageLocation: northamerica-northeast1 see the PR for discussion https://github.com/GoogleCloudPlatform/pubsec-declarative-toolkit/pull/105 guardrails partial regression testing Welcome to Cloud Shell! Type "help" to get started. To set your Cloud Platform project in this session use “gcloud config set project [PROJECT_ID]” michael@cloudshell:~$ history 1 history 2 ls 3 history 4 ls 5 history michael@cloudshell:~$ gcloud config set project pubsec-declarative-tk-apgz Updated property [core/project]. michael@cloudshell:~ (pubsec-declarative-tk-apgz)$ export REGION=northamerica-northeast1 michael@cloudshell:~ (pubsec-declarative-tk-apgz)$ export PROJECT_ID=$(gcloud config list --format 'value(core.project)') michael@cloudshell:~ (pubsec-declarative-tk-apgz)$ export ORG_ID=$(gcloud projects get-ancestors $PROJECT_ID --format='get(id)' | tail -1) michael@cloudshell:~ (pubsec-declarative-tk-apgz)$ echo $ORG_ID 431498985862 michael@cloudshell:~ (pubsec-declarative-tk-apgz)$ gcloud alpha logging settings update --organization=$ORG_ID --storage-location=$REGION ERROR: (gcloud.alpha.logging.settings.update) User<EMAIL_ADDRESS>does not have permission to access organizations instance [431498985862] (or it may not exist): Permission 'logging.cmekSettings.update' denied on resource (or it may not exist). - '@type': type.googleapis.com/google.rpc.ErrorInfo domain: iam.googleapis.com metadata: permission: logging.cmekSettings.update reason: IAM_PERMISSION_DENIED michael@cloudshell:~ (pubsec-declarative-tk-apgz)$ export EMAIL=$(gcloud config list --format json|jq .core.account | sed 's/"//g') michael@cloudshell:~ (pubsec-declarative-tk-apgz)$ echo $EMAIL <EMAIL_ADDRESS>michael@cloudshell:~ (pubsec-declarative-tk-apgz)$ gcloud organizations add-iam-policy-binding "${ORG_ID}" --member "user:${EMAIL}" --role roles/logging.admin Updated IAM policy for organization [431498985862]. 
bindings:
- members:
  - domain:approach.gcp.zone
  role: roles/billing.creator
- members:
  -<EMAIL_ADDRESS>
  role: roles/logging.admin
- members:
  -<EMAIL_ADDRESS>
  role: roles/orgpolicy.policyAdmin
- members:
  -<EMAIL_ADDRESS>
  role: roles/resourcemanager.folderAdmin
- members:
  -<EMAIL_ADDRESS>
  role: roles/resourcemanager.organizationAdmin
- members:
  - domain:approach.gcp.zone
  role: roles/resourcemanager.projectCreator
etag: BwXofMrmLtY=
version: 1
michael@cloudshell:~ (pubsec-declarative-tk-apgz)$ gcloud alpha logging settings update --organization=$ORG_ID --storage-location=$REGION
name: organizations/431498985862
storageLocation: northamerica-northeast1
2025-04-01T04:10:27.711268
2018-12-20T14:19:52
393066257
{ "authors": [ "chanseokoh", "smil2k" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14369", "repo": "GoogleContainerTools/jib", "url": "https://github.com/GoogleContainerTools/jib/issues/1370" }
gharchive/issue
Generated things in classpath do not get copied Description of the issue: I have the dependency plugin unpacking the webui we would like to display into target/classes/public. This directory is picked up by spring-boot later and displayed. I just noticed that Jib does not pick up this directory though. Expected behavior: Jib should put everything available under classes into the container. Steps to reproduce: add plugin:

```xml
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>1.0.0-rc1</version>
  <configuration>
    <container>
      <ports>8080</ports>
    </container>
    <to>
      <tags>
        <t>latest</t>
        <t>${project.version}</t>
      </tags>
      <image>xxx/backend</image>
    </to>
  </configuration>
</plugin>
```

create some files under /classes (eg test.js) run jib:buildTar The resulting container won't contain the .js file. Environment: maven 3.6 Have you checked /app/resources/public? Sorry, my fault.
2025-04-01T04:10:27.727795
2024-04-23T20:11:32
2259690542
{ "authors": [ "Spark450", "chrisolsen" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14370", "repo": "GovAlta/ui-components", "url": "https://github.com/GovAlta/ui-components/issues/1793" }
gharchive/issue
Skeleton: profile type not rendering properly <goa-skeleton type="profile"></goa-skeleton> @ArakTaiRoth Please get in touch if you have any questions.
2025-04-01T04:10:27.756533
2021-11-16T14:13:50
1054929969
{ "authors": [ "IlayRosenberg", "Jongy" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14371", "repo": "Granulate/gprofiler", "url": "https://github.com/Granulate/gprofiler/pull/213" }
gharchive/pull-request
Added java version parsing and enhanced version checks Description Enhanced the current java version check mechanism to parse the output of java -version and compare it against known "good" versions Related Issue Motivation and Context How Has This Been Tested? Hasn't been tested yet Screenshots Checklist: [ ] I have read the CONTRIBUTING document. [ ] I have updated the relevant documentation. [ ] I have added tests for new logic. We can add simple automatic tests that patch the min version and check for the matching error in the log, similarly to https://github.com/Granulate/gprofiler/pull/212/commits/a15735d93232c6a6225a5ae8890e473114923897
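For illustration, parsing the output of java -version and comparing it against a minimum version might look roughly like the sketch below. The function names and the baseline version are assumptions for this example, not gProfiler's actual code.

```python
# Illustrative sketch only; names and the baseline version are assumptions.
import re
import subprocess
from typing import Tuple

MINIMAL_SUPPORTED_JAVA = (8, 0)  # assumed "known good" baseline

def get_java_version(java_path: str = "java") -> Tuple[int, int]:
    # `java -version` prints to stderr, e.g. 'openjdk version "11.0.16" 2022-07-19'
    output = subprocess.run(
        [java_path, "-version"], capture_output=True, text=True, check=True
    ).stderr
    match = re.search(r'version "(\d+)(?:\.(\d+))?(?:\.(\d+))?', output)
    if match is None:
        raise ValueError(f"could not parse `java -version` output: {output!r}")
    major, minor, patch = (int(g) if g else 0 for g in match.groups())
    if major == 1:  # legacy "1.8.0_xxx" style: the real major version is the second field
        major, minor = minor, patch
    return major, minor

def is_supported(version: Tuple[int, int]) -> bool:
    return version >= MINIMAL_SUPPORTED_JAVA
```

A test could then monkeypatch MINIMAL_SUPPORTED_JAVA upward and assert that the matching warning appears in the log, along the lines of the automatic tests mentioned above.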
2025-04-01T04:10:27.761464
2022-12-21T13:16:34
1506283636
{ "authors": [ "Jongy", "pfilipko1" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14372", "repo": "Granulate/gprofiler", "url": "https://github.com/Granulate/gprofiler/pull/642" }
gharchive/pull-request
dotnet profiler on windows Description Attempt to enable dotnet profiler on the windows OS. Related Issue https://github.com/Granulate/gprofiler/issues/623 Motivation and Context Improving windows compatibility will result in increased usability of gprofiler How Has This Been Tested? Screenshots Checklist: [x] I have read the CONTRIBUTING document. [x] I have updated the relevant documentation. [ ] I have added tests for new logic. A few small comments :) Can you post a sample log of gProfiler's run on Windows, profiling a dotnet app? gprofiler-dotnet-on-windows.log Here's a file with a sample log of gprofiler profiling dotnet on windows. Update from master and ensure Windows CI passes :) I'm ready to merge afterwards. Can merge from master after the dotnet change @marcin-ol this looks good from my end, PTAL and approve?
2025-04-01T04:10:27.762444
2022-09-13T21:42:43
1372037780
{ "authors": [ "StevenGolden1203", "tu-yiwen" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14373", "repo": "Graph-Learning-Benchmarks/gli", "url": "https://github.com/Graph-Learning-Benchmarks/gli/pull/297" }
gharchive/pull-request
Add datasets Description Add MNIST and CIFAR datasets. Both are graph classification datasets. @jiaqima I think we can merge this PR.
2025-04-01T04:10:27.779837
2015-11-09T16:39:37
115915243
{ "authors": [ "mattvella07", "pmuens", "zaverichintan" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14374", "repo": "GravityProject/gravity", "url": "https://github.com/GravityProject/gravity/issues/8" }
gharchive/issue
Code snippets Share and comment code snippets. Looks like markdown supports code highlighting: https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet#code, so I think #43 will solve this one too. Yep. This might be related! Done.
2025-04-01T04:10:27.813534
2023-10-22T17:58:25
1955995398
{ "authors": [ "Avey777", "Singosgu", "stockarea" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14375", "repo": "GreaterWMS/GreaterWMS", "url": "https://github.com/GreaterWMS/GreaterWMS/issues/340" }
gharchive/issue
[FR] Integrate MYSQL DATABASE Is your feature request the result of a bug? Please link it here. Problem A clear and concise description of what the problem is. e.g. I'm always frustrated when [...] Right now, the database resides in SQLite and it's very limiting. When can we have MySQL or PostgreSQL? Suggested solution A clear and concise description of what you want to happen. Describe alternatives you've considered A clear and concise description of any alternative solutions or features you've considered. Examples of other systems Show how other software handles your FR if you have examples. Do you want to develop this? If so please describe briefly how you would like to implement it (so we can give advice) and if you have experience in the needed technology (you do not need to be a pro - this is just as information for us). pip install mysqlclient pip install mysqlclient How to use MySQL
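Once mysqlclient is installed, pointing a standard Django settings.py at MySQL is usually all that is needed. A hedged sketch follows; the database name and credentials are placeholders, not values shipped with GreaterWMS.

```python
# settings.py sketch -- placeholder credentials, adjust for your deployment.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "greaterwms",      # assumed database name
        "USER": "wms_user",        # placeholder
        "PASSWORD": "change-me",   # placeholder
        "HOST": "127.0.0.1",
        "PORT": "3306",
        "OPTIONS": {"charset": "utf8mb4"},
    }
}
# Then create the schema in the new database:
#   python manage.py migrate
```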
2025-04-01T04:10:27.849915
2023-08-24T13:26:26
1865158552
{ "authors": [ "jmcook1186", "narekhovhannisyan" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14380", "repo": "Green-Software-Foundation/ief", "url": "https://github.com/Green-Software-Foundation/ief/pull/94" }
gharchive/pull-request
adds shell command imp Types of changes [ ] Enhancement (project structure, spelling, grammar, formatting) [ ] Bug fix (non-breaking change which fixes an issue) [x] New feature (non-breaking change which adds functionality) [ ] Breaking change (fix or feature that would cause existing functionality to change) [x] My code follows the code style of this project. [x] My change requires a change to the documentation. [ ] I have updated the documentation accordingly. [ ] I have added tests to cover my changes. [ ] All new and existing tests passed. A description of the changes proposed in the Pull Request implements the shell command IMP requested in #83 adds a new file to src/lib containing a single function shellCommand() that does the following: spawns a child process that calls a local executable handles the data from the local executable passes an error to stderr if the call fails the local executable is expected to expose a cli that accepts two arguments: --calculate and --impl the returned data is displayed in the console via stdout and saved locally as a yaml file Note This is in draft pending: collab with @gnanakeethan on IEF integration collab with folks working on #84 (pimpl.py), including refining the error handling Also you can rename the file to shell-imp.ts to follow the same convention as the other parts of the repo. Also, I'm out of the context, but the module name and the function name seem too general. For example, shellCommand could be renamed to something that better describes what the function does.
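For readers outside the codebase, the flow described above can be sketched in a few lines. The real implementation is the TypeScript shellCommand() in src/lib; the Python sketch below only mirrors the described behaviour, and the executable name and argument layout are assumptions.

```python
# Hedged illustration of the described flow, not the actual src/lib TypeScript code.
import subprocess
import sys

def shell_command(executable: str, impl_path: str) -> str:
    """Spawn a local executable exposing `--calculate` and `--impl <file>`,
    return its stdout, and surface failures on stderr."""
    try:
        completed = subprocess.run(
            [executable, "--calculate", "--impl", impl_path],
            capture_output=True, text=True, check=True,
        )
    except subprocess.CalledProcessError as err:
        print(err.stderr, file=sys.stderr)  # pass the error through to stderr
        raise
    return completed.stdout  # caller can display it and/or save it as YAML

# Example (hypothetical executable name and impl file):
# print(shell_command("./pimpl", "examples/impl.yaml"))
```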
2025-04-01T04:10:27.912694
2023-08-04T12:24:12
1836654942
{ "authors": [ "waynexia" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14381", "repo": "GreptimeTeam/greptimedb", "url": "https://github.com/GreptimeTeam/greptimedb/pull/2106" }
gharchive/pull-request
docs: rfc of refactoring table trait I hereby agree to the terms of the GreptimeDB CLA What's changed and what's your intention? RFC of refactoring the table trait. rendered Checklist [ ] I have written the necessary rustdoc comments. [ ] I have added the necessary unit tests and integration tests. Refer to a related PR or issue link (optional) #2065 The scan_to_stream is kind of confused. Why not just the scan. The stream should be the first-class citizen. There was a scan(), which doesn't return RecordBatchStream but PhysicalPlan instead. The scan_to_stream() is introduced in https://github.com/GreptimeTeam/greptimedb/pull/1639, there was a time that both methods existed and this is why it has a weird name
2025-04-01T04:10:27.915825
2024-05-03T02:49:09
2276803450
{ "authors": [ "tisonkun" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14382", "repo": "GreptimeTeam/greptimedb", "url": "https://github.com/GreptimeTeam/greptimedb/pull/3854" }
gharchive/pull-request
ci: replace pull-request actions with cyborg I hereby agree to the terms of the GreptimeDB CLA. Refer to a related PR or issue link (optional) What's changed and what's your intention? Reduce external actions dependencies. You can see https://github.com/tisonkun/greptimedb/actions/workflows/semantic-pull-request.yml for examples. Checklist [ ] I have written the necessary rustdoc comments. [ ] I have added the necessary unit tests and integration tests. [x] This PR does not require documentation updates. https://github.com/tisonkun/greptimedb/actions/workflows/semantic-pull-request.yml showcases the action's manner.
2025-04-01T04:10:27.925218
2023-08-15T08:02:33
1851051503
{ "authors": [ "DropD", "egparedes", "havogt" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14383", "repo": "GridTools/gt4py", "url": "https://github.com/GridTools/gt4py/pull/1319" }
gharchive/pull-request
feat[next] high-level field storage API Introduce user API to allocate fields in gt4py.next. Summary of main changes: Introduce FieldBuffer allocator protocols and implementations Introduce the concept of Backend as ProgramExecutor & Allocator Replace np_as_located_field with as_field Make NdArrayField public Fixes for _core.definitions typings Fixes and extensions of eve.extended_typing Refactor the handling of backends/program processors in the testing infrastructure with string enumerations representing the qualified name of the Python symbol, which can be loaded on demand Rename some executor symbols and modules Minor style changes to imports and imported symbols to follow coding guidelines. Open"To Do"s for future PRs: Add support for GTFieldInterface protocol in cartesian and use it instead of NextGTDimsInterface protocol in next. Add support for aligned_index != None in FieldBufferAllocator implementations Add support for zero-copy construction of fields in constructors.as_field() I propose we document all TODOs Added open TODOs to the PR description cscs-ci run
2025-04-01T04:10:27.926888
2020-11-23T14:34:50
748853800
{ "authors": [ "DropD" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14384", "repo": "GridTools/gt4py", "url": "https://github.com/GridTools/gt4py/pull/261" }
gharchive/pull-request
Gtc integration numpy ir Description A Numpy IR and generator that can do little more than arithmetic and assignment with some offsets. I think when attaching to OIR the following should be discussed: Represent the parallel "pass" just as a VectorAssignment at the Computation node I thought of this and I don't yet see any benefit from branching out to 3D-assignment OR sequential pass -> 2D assignment from Computation.
2025-04-01T04:10:27.931677
2022-08-10T05:50:06
1334106045
{ "authors": [ "gronerl", "jdahm" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14385", "repo": "GridTools/gt4py", "url": "https://github.com/GridTools/gt4py/pull/891" }
gharchive/pull-request
Allow dace v0.14rc1 Description Relaxes the dace version. bors try bors try harder bors try- bors try bors try bors try The failing tests should pass after merging/squashing/rebasing master. As overriding gitlab is currently disable, can @jdahm please do so and then bors try again? bors try Our favorite random test failure is happening again... @gronerl was there a setting that changed in dace? Likely some change on the DaCe side, yes. I couldn't figure out the root of the issue the last time this occured. The no-storage PR that also updates the dace version faced the same issue. There, the parametrization structure of those tests are re-done eventually. For now, let's just remove GPU tests from https://github.com/jdahm/gt4py/blob/11f16ffe7f72c2249f989ae2339872d841c6fc6c/tests/test_integration/test_dace_parsing.py#L145 entirely. I'll do that and add a note to re-enable later. Yet another one. How fun. Looks like #825 will be faster after all, it's even more restrictive on the DaCe version actually. Yeah at this point let's just merge that one. I'll close this.
2025-04-01T04:10:27.935671
2019-02-01T13:23:26
405700068
{ "authors": [ "archy-bold" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14386", "repo": "GriddleGriddle/Griddle", "url": "https://github.com/GriddleGriddle/Griddle/issues/847" }
gharchive/issue
Cannot Filter on Nested Data using LocalPlugin Griddle version 1.13.1 Expected Behavior Entering a query in the filter box which matches a nested field should return the row with that data. Actual Behavior Entering a query in the filter box which matches a nested field returns no results. Steps to reproduce Run the storybook and browse to the 'with nested column data' story. Enter 'hawaii' in the filter, observe there are no results. Pull request with failing test or storybook story with issue Can be reproduced from the storybook. Edit: it appears this last worked in v1.11.2 So having delved into this a little, it appears the reason for this is that rather than allowing filtering on all base (non-nested) columns except those with filterable set to false, it now ignores any column without a ColumnDefinition set, i.e. if a column isn't defined in the ColumnDefinitions it's not filtered on at all. A workaround is to define your nested property as a non-visible column, so it's included in the list of columns to filter. This performs toString() on the nested Map or List object, exposing all the properties. <ColumnDefinition id="location" visible={false} /> It doesn't feel like starting with the base properties and then filtering those based on the columnProperties is the right way round, especially considering this will mean all nested properties will be ignored.
2025-04-01T04:10:27.937254
2016-03-12T06:21:46
140352157
{ "authors": [ "DudeMcDude" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14387", "repo": "GrognardsFromHell/TemplePlus", "url": "https://github.com/GrognardsFromHell/TemplePlus/issues/176" }
gharchive/issue
Corrupt Save Case Study #1 Crashes due to null obj handle when moving your characters https://drive.google.com/open?id=0BzF5KfpDewVYek01NFpjM3RUUHc Reported by player to crash when going upstairs to Terjon Currently working on making ObjectEventListRangeUpdate safe (check for null handle first)
2025-04-01T04:10:27.968701
2024-05-20T13:42:47
2306033319
{ "authors": [ "CatChenal" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14389", "repo": "GunnerLab/MCCE_crgms", "url": "https://github.com/GunnerLab/MCCE_crgms/issues/1" }
gharchive/issue
Output tautomers in get_topN_crg_microstates To reduce processing, only output tautomers if res_of_interest list not empty, which means all residues. Entails: reverting to 3-letters AA codes. parsing tpl CONF_LIST. tcgrms has been integrated into MCCE4.
2025-04-01T04:10:27.975880
2024-08-21T19:32:30
2478906243
{ "authors": [ "GustavEikaas", "franroa" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14390", "repo": "GustavEikaas/easy-dotnet.nvim", "url": "https://github.com/GustavEikaas/easy-dotnet.nvim/issues/35" }
gharchive/issue
feat(testrunner):🎁 Group flat tests by default group name In some cases tests do not belong to any namespace. It would be nice to assign these to some sort of default namespace so you can target their execution as a group, and also for a cleaner UI. Inspired by: #33 @franroa Do you have example code of how to make a test that shows up as flat? I tried this but it still groups by classname.testname Hi! Thanks for working on this and for the other issues! Yes, I will try to prepare an example tomorrow Hallo! Here you have an example: https://github.com/franroa/specflow_example (in this file you have the test definition in case you want to put some more tests https://github.com/franroa/specflow_example/blob/main/PrimeService.Tests/Feature/Test.feature) I'm struggling a bit with making a "fake" group for flat tests but the PR I linked will at least group them under the csproject they belong to. Which should hopefully help a little bit I think that is a good improvement. To me it is more than enough. Thanks! Closing this issue for now, I really can't imagine fixing this more than I already have by grouping under csproject
2025-04-01T04:10:27.994271
2019-12-30T17:58:54
543961855
{ "authors": [ "H4CKY54CK", "PythonCoderAS" ], "license": "BSD-2-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14391", "repo": "H4CKY54CK/praw", "url": "https://github.com/H4CKY54CK/praw/pull/4" }
gharchive/pull-request
Deal with merge conflict and add in new types Merges with upstream Deals with conflict Adds in type data for new methods Fixes flake8 issues Ha. H4CKY54CK -1 | GitHub - more than 1 Thanks, I didn't realize you had done this.
2025-04-01T04:10:28.010082
2022-10-25T08:19:40
1422059495
{ "authors": [ "DimitriB01", "juli-txt" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14392", "repo": "HASKI-Kempten/HASKI-Frontend", "url": "https://github.com/HASKI-Kempten/HASKI-Frontend/issues/11" }
gharchive/issue
[Technical] Understand functionality of JWT-Token and generate one (Proof of concept) The communication between Moodle and HASKI needs a JWT token to work together. In this issue we want to understand the functionality of this token and generate one ourselves. You can get the public key with MongoDB.

JSON Web Token

1. Definition and use
JWT is an open standard to transmit information securely between parties as a compact JSON object. By applying a digital signature, the message and the information it contains can be verified and trusted. JSON Web Tokens are therefore used for authorization and secure information exchange [1].

2. Composition
JWTs are a combination of the three parts "Header", "Payload" and "Signature", separated by dots, with the following format [1]: HEADER.PAYLOAD.SIGNATURE

Header
The header consists of two parts and is Base64Url encoded: first, the algorithm used for signing, and second, the token type [1]. Example: { "alg": "HS256", "typ": "JWT" }

Payload
The payload is Base64Url encoded and contains claims which represent statements about an entity such as the user. Additionally it can provide supplementary data. There are three types of claims: registered, public and private claims. Registered claims are predefined and not mandatory but recommended. They can be seen here: https://www.rfc-editor.org/rfc/rfc7519#section-4.1. Public claims, on the other hand, can be defined by those who are using JWTs. To avoid collisions they should be registered here: https://www.iana.org/assignments/jwt/jwt.xhtml. Lastly, private claims are custom claims both parties agreed on [1].

Signature
The signature is used to verify that the message wasn't changed along the way and that the sender is who it claims to be. It is created by signing the encoded header, the encoded payload and a secret if the HMAC algorithm is used. Alternatively an RSA or ECDSA public/private key pair can replace the secret [1]. To decode, verify and generate JWTs, the following website can be used: https://jwt.io/.

3. JWT in LTI
If a tool in the LTI environment wants to use an LTI Service, it must request an access token from the Authorization Service to affirm its identity. This can be done by providing a JWT with its client credentials to the Authorization Service [4]. Furthermore, LTI messages sent from the platform are OpenID Tokens and messages from the tool are JSON Web Tokens [5]. Access tokens are defined in OAuth and can be JWTs or a random string. ID Tokens, on the other hand, are defined in OpenID Connect and must always be JSON Web Tokens [6]. To confirm the content and the JWT as a whole, a confirmation method is needed [2]. Such a method checks whether the presenter owns a proof-of-possession key [2]. In the case of the ltijs library, JSON Web Keys (JWK for short) are used for confirmation [3].

4. Source
[1] https://jwt.io/introduction
[2] https://www.rfc-editor.org/rfc/rfc7800.html
[3] https://www.iana.org/assignments/jwt/jwt.xhtml#confirmation-methods
[4] https://www.imsglobal.org/spec/lti/v1p3/impl/
[5] https://www.imsglobal.org/spec/lti/v1p3/#platforms-and-tools
[6] https://oauth.net/id-tokens-vs-access-tokens/
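As a quick proof of concept, generating and verifying a token takes only a few lines with a library such as PyJWT. The claims, issuer and secret below are placeholders, and real LTI flows use RS256 with a public/private key pair rather than a shared secret.

```python
# Proof-of-concept only: placeholder claims and a shared secret (HS256).
# LTI deployments would typically use RS256 with a key pair instead.
import time
import jwt  # PyJWT

secret = "change-me"
claims = {
    "iss": "https://tool.example.org",     # issuer (registered claim)
    "sub": "client-id-123",                # subject
    "aud": "https://platform.example.org", # audience
    "iat": int(time.time()),
    "exp": int(time.time()) + 300,         # expires in 5 minutes
}

token = jwt.encode(claims, secret, algorithm="HS256")  # HEADER.PAYLOAD.SIGNATURE
decoded = jwt.decode(token, secret, algorithms=["HS256"],
                     audience="https://platform.example.org")
print(token)
print(decoded["iss"])
```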
2025-04-01T04:10:28.021720
2023-06-19T11:56:41
1763380807
{ "authors": [ "paulswithers", "umeli" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14394", "repo": "HCL-TECH-SOFTWARE/Domino-rest-api", "url": "https://github.com/HCL-TECH-SOFTWARE/Domino-rest-api/issues/3" }
gharchive/issue
link broken? The "discuss this documentation here on Github" link is pointing to the 404 hell.... https://github.com/HCL-TECH-SOFTWARE/Domino-rest-api Thanks, I've created a PR internally for this. As a workaround, the "d" of "domino" in the URL needs capitalizing.
2025-04-01T04:10:28.023759
2023-03-18T05:34:50
1630179065
{ "authors": [ "CLAassistant", "sushilpandeyy" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14395", "repo": "HCL-TECH-SOFTWARE/volt-mx-docs", "url": "https://github.com/HCL-TECH-SOFTWARE/volt-mx-docs/pull/396" }
gharchive/pull-request
Changed the Broken Link with Correct Link The URL in line number 1584 was broken, but now it is fixed. Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
2025-04-01T04:10:28.029025
2020-03-31T22:15:14
591468925
{ "authors": [ "jreadey" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14396", "repo": "HDFGroup/hsds", "url": "https://github.com/HDFGroup/hsds/issues/46" }
gharchive/issue
Implement RBAC Implement Role Based Access Control (RBAC) so that ACLs can reference roles in addition to individual user names. Implemented in https://github.com/HDFGroup/hsds/commit/0fe25859d45e17b22e558b4b880fb2ed4a120bd6
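Conceptually, role-aware ACL evaluation can be sketched as below. The role and ACL shapes are illustrative assumptions for this sketch, not HSDS's actual data model or implementation.

```python
# Conceptual sketch only -- not HSDS's actual ACL implementation.
ROLES = {"admins": {"alice", "bob"}, "readers": {"carol"}}     # role -> members (assumed shape)
ACLS = {"alice": {"read", "update"}, "readers": {"read"}}       # username or role -> permissions

def has_permission(username: str, action: str) -> bool:
    # check a direct per-user ACL first
    if action in ACLS.get(username, set()):
        return True
    # then any ACL granted to a role the user belongs to
    return any(
        action in ACLS.get(role, set())
        for role, members in ROLES.items()
        if username in members
    )

assert has_permission("carol", "read")        # granted via the "readers" role
assert not has_permission("carol", "update")
```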
2025-04-01T04:10:28.051595
2023-10-14T14:26:59
1943293342
{ "authors": [ "HDVinnie" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14404", "repo": "HDVinnie/TrackerHub", "url": "https://github.com/HDVinnie/TrackerHub/issues/24423" }
gharchive/issue
⚠️ SceneTime has degraded performance In 8e23cc4, SceneTime ($ST) experienced degraded performance: HTTP code: 200 Response time: 1198 ms Resolved: SceneTime performance has improved in 267bc20 after 6 minutes.
2025-04-01T04:10:28.055626
2024-02-01T16:04:11
2112868806
{ "authors": [ "HDVinnie" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14406", "repo": "HDVinnie/TrackerHub", "url": "https://github.com/HDVinnie/TrackerHub/issues/26212" }
gharchive/issue
⚠️ SceneTime has degraded performance In 284f113, SceneTime ($ST) experienced degraded performance: HTTP code: 200 Response time: 20285 ms Resolved: SceneTime performance has improved in 52c0f70 after 19 minutes.
2025-04-01T04:10:28.059691
2024-04-09T20:11:06
2234220897
{ "authors": [ "HDVinnie" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14408", "repo": "HDVinnie/TrackerHub", "url": "https://github.com/HDVinnie/TrackerHub/issues/27267" }
gharchive/issue
⚠️ BeyondHD has degraded performance In a0b54ec, BeyondHD ($BHD) experienced degraded performance: HTTP code: 200 Response time: 6779 ms Resolved: BeyondHD performance has improved in 3328fea after 8 minutes.
2025-04-01T04:10:28.155224
2015-04-15T17:50:02
68745227
{ "authors": [ "JasperSnoek", "dougalm", "kswersky", "mattjj" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14414", "repo": "HIPS/autograd", "url": "https://github.com/HIPS/autograd/issues/14" }
gharchive/issue
Vector valued outputs Please... :-)

This is a bit messy but it's a quick wrapper:

import autograd.numpy as np
from autograd import grad

def D(f, outdim):
    def f_i(i):
        return lambda *args, **kwargs: f(*args, **kwargs)[i]
    def deriv(*args, **kwargs):
        return np.concatenate(
            [grad(f_i(i))(*args, **kwargs)[None, ...] for i in xrange(outdim)])
    return deriv

It can be used like

A = np.random.randn(3, 3)
def test(v):
    return np.dot(A, v)

print D(test, 3)(np.ones(3))
print
print A

I'm sure it can be improved but I think it reflects the best general strategy (for reverse mode).

Jasper, do you want the full Jacobian of a vector-to-vector function or do you just want its diagonal? If it's the Jacobian itself that you want, then you have to loop over the gradients of each output component with respect to the input vector, as Matt shows. If you want just the diagonal, and if the off-diagonal elements are zero (the conversations we've had make me think that's what you're talking about), then you can just use the gradient of the sum of the output, and the calculation happens in a single pass. For example:

>>> import autograd.numpy as np
>>> from autograd import grad
>>> def jac_diag(fun):
...     return grad(lambda x: np.sum(fun(x)))
...
>>> x = np.linspace(-3, 3, 5)
>>> jac_diag(np.sin)(x)
array([-0.9899925,  0.0707372,  1.       ,  0.0707372, -0.9899925])
>>> np.cos(x)
array([-0.9899925,  0.0707372,  1.       ,  0.0707372, -0.9899925])

I could add the wrapper functions jacobian (Matt's D) and jac_diag if you think they'd be useful...

Those wrappers would be useful!

That's really neat @mattjj. I already used your solution to quickly put together a Kayak module :-) I took the diagonal of the output of Matt's solution, but yes taking the sum is much simpler.

Those wrappers are tremendously useful, but I'm not sure where exactly they'd fit into autograd.

Ok, I'll put them in autograd.util for now.

Sounds like there was side channel information about what Jasper really wanted! Support for general derivatives (of maps from R^n to R^m), a.k.a. Jacobians, would be a nice feature even if it's not the main thrust of the library. Then autograd could be used for easy implementations of e.g. extended Kalman filters and smoothers (unless I'm missing something). Maybe the jacobian function could avoid the outdim (or outshape) argument if it ran a single forward pass the first time it was called and inspected (and cached) the shape of the result.

@dougalm I don't think that jac_diag function returns the diagonal of the jacobian in general:

def test2(x):
    return np.array([np.sum(x), np.sum(x**2), np.sum(x**3)])

print D(test2, 3)(np.ones(3))

def jac_diag(fun):
    return grad(lambda x: np.sum(fun(x)))

print jac_diag(test2)(np.ones(3))

# prints:
# [[ 1.  1.  1.]
#  [ 2.  2.  2.]
#  [ 3.  3.  3.]]
# [ 6.  6.  6.]

Exactly. It's a common case that people seem interested in: they have a scalar-to-scalar function and they want its gradient at a number of places. Mike Gelbart and Jon Malmaud were both cross that grad doesn't automatically do this when you give it a vector-to-vector function. But perhaps jac_grad is a misleading name. Maybe elementwise_grad? or map_grad?

Or maybe diag_jac. I'm probably parsing too much here, but that could be slightly more suggestive that it computes a diagonal Jacobian (represented by its diagonal elements) rather than the Jacobian's diagonal (in the general case). On the other hand jac_diag probably conveys pretty much the same thing and the docstring can be used to spell out the constraints :).

Done.
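As a present-day footnote (not part of the original thread): the wrappers that came out of this discussion are exposed at autograd's top level. The short sketch below assumes the current import path and names (jacobian, elementwise_grad), which may differ from the version being discussed here.

import autograd.numpy as np
from autograd import jacobian, elementwise_grad

def f(x):
    return np.array([np.sum(x), np.sum(x ** 2), np.sum(x ** 3)])

x = np.ones(3)
print(jacobian(f)(x))               # full 3x3 Jacobian, one reverse pass per output component
print(elementwise_grad(np.sin)(x))  # elementwise case: same values as np.cos(x)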
2025-04-01T04:10:28.178936
2021-02-01T09:58:43
798183557
{ "authors": [ "LeeRenJie" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14415", "repo": "HITK-TECH-Community/Community-Website", "url": "https://github.com/HITK-TECH-Community/Community-Website/pull/326" }
gharchive/pull-request
Uploaded the new logo for website Issue that this pull request solves Closes: #85 Proposed changes Brief description of what is fixed or changed Uploaded the logo I created into the assets folder. Types of changes Put an x in the boxes that apply [ ] Bugfix (non-breaking change which fixes an issue) [ ] New feature (non-breaking change which adds functionality) [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected) [ ] Documentation update (Documentation content changed) [x] Other (please describe): Upload logo Checklist Put an x in the boxes that apply [x] My code follows the style guidelines of this project [x] I have performed a self-review of my own code [x] I have commented on my code, particularly in hard-to-understand areas [x] I have made corresponding changes to the documentation [x] My changes generate no new warnings [x] My changes do not break the current system and it passes all the current test cases. Screenshots Example: Other information Any other information that is important to this pull request This PR is for DWOC Thank you!
2025-04-01T04:10:28.185706
2021-02-19T09:45:41
811869547
{ "authors": [ "amanprateek123" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14416", "repo": "HITK-TECH-Community/Community-Website", "url": "https://github.com/HITK-TECH-Community/Community-Website/pull/405" }
gharchive/pull-request
Enhancement in Edit broadcast modal SWOC Issue that this pull request solves Closes: #400 Proposed changes All the requested change in all broadcast page is done Brief description of what is fixed or changed Types of changes Put an x in the boxes that apply [x] Bugfix (non-breaking change which fixes an issue) [ ] New feature (non-breaking change which adds functionality) [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected) [ ] Documentation update (Documentation content changed) [ ] Other (please describe): Checklist Put an x in the boxes that apply [x] My code follows the style guidelines of this project [x] I have performed a self-review of my own code [ ] I have commented my code, particularly in hard-to-understand areas [ ] I have made corresponding changes to the documentation [x] My changes generate no new warnings [x] My changes does not break the current system and it passes all the current test cases. Screenshots Please attach the screenshots of the changes made in case of change in user interface Other information Any other information that is important to this pull request
2025-04-01T04:10:28.211910
2015-05-21T15:05:08
79036107
{ "authors": [ "matthias-springer", "timfel" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14417", "repo": "HPI-SWA-Lab/RSqueak", "url": "https://github.com/HPI-SWA-Lab/RSqueak/pull/73" }
gharchive/pull-request
Add file prim stubs and fix primitiveFileWrite primitiveFileWrite should support strings and word arrays. Also added primitives for truncating files and primitive for setting mac file type (stub only, no implementation). This won't translate on windows (which is missing os.ftruncate), you have to guard this and do something else on that platform (I think we dealt with that issue in Topaz, you may be able to just copy the code) Besides those minor comments, this can be merged Somebody have a clue why this is still failing on Travis? Works on my machine (OS X). It's a pypy head issue - just don't update your pypy checkout for now ;)
2025-04-01T04:10:28.214651
2019-05-13T05:01:23
443205985
{ "authors": [ "wondervictor", "xunhen" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14418", "repo": "HRNet/HRNet-Object-Detection", "url": "https://github.com/HRNet/HRNet-Object-Detection/issues/10" }
gharchive/issue
Question about SyncBN with multi-GPU training Hello! Did you use SyncBN when training this HRNet? I find that mmdetection had not implemented SyncBN in the version you are using. Well, our released code does not support SyncBN. All models are trained with fixed batch normalization. If you want to train HRNets with SyncBN, using SyncBN in PyTorch 1.1 might be a great choice (just change all BatchNorm into SyncBatchNorm). Thank you very much!
2025-04-01T04:10:28.226619
2015-11-07T18:42:21
115684355
{ "authors": [ "dpaquette", "shawnwildermuth" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14419", "repo": "HTBox/allReady", "url": "https://github.com/HTBox/allReady/pull/184" }
gharchive/pull-request
Force SQL Server Usage Removed InMemory option for DB to force more mileage on the Database usage. Changed Sample Data configuration to be in a container. If you have a problem running migrations, delete your database first. See me if you have questions. This looks good. Ready to merge once the unit tests are fixed :shipit:
2025-04-01T04:10:28.246823
2022-07-24T07:44:48
1315835332
{ "authors": [ "Chaitalishetty", "Soumya-Kushwaha", "vedsom" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14420", "repo": "HackClubRAIT/Wizard-Of-Docs", "url": "https://github.com/HackClubRAIT/Wizard-Of-Docs/issues/140" }
gharchive/issue
Reverse a Linked List Description A complete doc about reversing a linked list using cpp Programming language [ ] C [x] C++ [ ] Java [ ] Python Are you contributing under any open-source program ? HackClubRAIT Please assign me @Chaitalishetty @Soumya-Kushwaha Assigning you this issue. Happy Coding! @Chaitalishetty can you assign me this issue??
2025-04-01T04:10:28.271634
2024-01-02T18:04:41
2062766733
{ "authors": [ "danilofuchs", "fsuk", "hueter", "iernie", "martijnrusschen", "yuki0410-dev" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14422", "repo": "Hacker0x01/react-datepicker", "url": "https://github.com/Hacker0x01/react-datepicker/issues/4433" }
gharchive/issue
Upgrade to date-fns version 3 There are some mild breaking changes in date-fns version 3, particularly around imports. Codebases that use date-fns and react-datepicker are blocked from upgrading date-fns until react-datepicker updates this dependency. I could use some help to upgrade this in our package. I'm preparing a major upgrade with some breaking changes and could include this as well ideally. Tried to get the project running so I could look at contributing; however, I encountered an issue getting the project to build (something about command 'tee' not found). When I migrated some of my own code it was just a case of updating the format of the import statements, which I was able to do with a regex find and replace, e.g. import parse from "date-fns/parse"; -> import { parse } from "date-fns/parse"; I have also found a few places where there are imports from the root of date-fns; these need replacing: import { isSunday } from "date-fns" -> import { isSunday } from "date-fns/isSunday". I will have another go at getting the project running next week. Awesome, sounds good. I'll be here to review the PR once you have it ready. Are there any advantages to importing from the subfolder instead of the root package itself now that everything is a named export? The usage docs themselves show import { isSunday } from "date-fns"; and you would save on multiple lines of imports from the same package. Closed via https://github.com/Hacker0x01/react-datepicker/commit/68236d8d82d2e1f4bf94252a7acef4800d027637? ( cc. @martijnrusschen )
2025-04-01T04:10:28.272620
2019-11-05T10:06:32
517671477
{ "authors": [ "yncat" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14423", "repo": "Hacker0x01/react-datepicker", "url": "https://github.com/Hacker0x01/react-datepicker/pull/1960" }
gharchive/pull-request
Add ariaLabelledBy to the documentation I added ariaLabelledBy to the list of available props. I should have done this in the first PR. Sorry about that. Oops, duplicate commits. Will reopen shortly.
2025-04-01T04:10:28.286857
2016-11-09T10:55:28
188218431
{ "authors": [ "Rimap47", "ShraddhaDevaiya", "VBQL", "atainter", "delahoya1", "liquid1", "pallense", "rosegoldthakur" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14424", "repo": "HackerHouseYT/Smart-Mirror", "url": "https://github.com/HackerHouseYT/Smart-Mirror/issues/31" }
gharchive/issue
weather Can you please let me know why I get this error? And on the display only date/time and news come up, nothing more.

Traceback (most recent call last):
  File "smartmirror.py", line 158, in get_weather
    weather_obj = json.loads(r.text)
  File "/usr/lib/python2.7/json/__init__.py", line 338, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python2.7/json/decoder.py", line 384, in raw_decode
    raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
Error: No JSON object could be decoded. Cannot get weather.

You've added your API token, correct?

Thanks for your reply, I figured it out yesterday. So now it works just fine. But where does it get the news from? I would like to see local news, you know.

You get news from a Google RSS feed using feedparser. To get your local news, find any RSS feed and replace the Google one. Also, you might want to remove the variables for country and only have feedparser read one RSS feed.

Thanks for your reply VBQL, I have replaced the RSS feeds but get the following error:

Traceback (most recent call last):
  File "smartmirror.py", line 231, in get_headlines
    headlines_url = "www.nrk.no/nyheter/toppsaker.rss" % news_country_code
TypeError: not all arguments converted during string formatting
Error: not all arguments converted during string formatting. Cannot get news.

Which variables should I remove?

You should remove the news variables that contain the country codes, that seems to mess it up for me. This should be the code in the News class:

class News(Frame):
    def __init__(self, parent, *args, **kwargs):
        Frame.__init__(self, parent, *args, **kwargs)
        self.config(bg='black')
        self.title = 'News'
        self.newsLbl = Label(self, text=self.title, font=('Helvetica', medium_text_size), fg="white", bg="black")
        self.newsLbl.pack(side=TOP, anchor=W)
        self.headlinesContainer = Frame(self, bg="black")
        self.headlinesContainer.pack(side=TOP)
        self.get_headlines()

    def get_headlines(self):
        try:
            for widget in self.headlinesContainer.winfo_children():
                widget.destroy()
            feed = feedparser.parse("www.nrk.no/nyheter/toppsaker.rss")
            for post in feed.entries[0:5]:
                headline = NewsHeadline(self.headlinesContainer, post.title)
                headline.pack(side=TOP, anchor=W)

Thank you VBQL, looks like that did the job. Can I ask you for one more thing? Regarding the location: when I set the latitude and longitude, the code doesn't display the location and shows the weather degrees abnormally high, and it doesn't give any error either. If I remove the latitude and longitude strings, then the code shows "can not pin point location" and the weather degrees become normal.

latitude = '60.408639'  # Set this if IP location lookup does not work for you (must be a string)
longitude = '5.227588'  # Set this if IP location lookup does not work for you (must be a string)

I also tried it without the apostrophes (').

First, you might want geolocation to be obtained from Google Maps. Also, the default temperature is displayed in Fahrenheit. To change it to Celsius, change the code from:

return 1.8 * (kelvin_temp - 273) + 32

to

return kelvin_temp - 273

riman47 how did you figure out the error?

I downloaded the software (package) and installed it from the folder.

rimap47?? how did you figure out the error?

Which issue?

@Rimap47 how did you fix the weather issue? Explain please.

"add api token" did fix the issue.

@Rimap47 I added the api token but it is still showing this type of error.
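As a side note on the conversion quoted above: the exact Kelvin offset is 273.15 (the snippet in the thread rounds it to 273). A hypothetical pair of helpers, with names not taken from smartmirror.py, would look like:

def kelvin_to_fahrenheit(kelvin_temp):
    # 273.15 is the exact Kelvin-to-Celsius offset; the thread rounds it to 273
    return 1.8 * (kelvin_temp - 273.15) + 32

def kelvin_to_celsius(kelvin_temp):
    return kelvin_temp - 273.15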
2025-04-01T04:10:28.288243
2018-10-12T03:22:02
369384405
{ "authors": [ "Aniket965", "paaaron" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14425", "repo": "Hacktoberfest-2018/Hello-world", "url": "https://github.com/Hacktoberfest-2018/Hello-world/pull/1758" }
gharchive/pull-request
Add Bools Conditionals Comparison Operators Add Bools Conditionals Comparison Operators This pull request contains conflicts. Please fix it and reopen
2025-04-01T04:10:28.313961
2019-04-16T03:10:43
433563858
{ "authors": [ "Vinhold", "emperorstarfinder", "kf6kjg", "life777eternal" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14426", "repo": "HalcyonGrid/halcyon", "url": "https://github.com/HalcyonGrid/halcyon/issues/79" }
gharchive/issue
In-World Time did not change to PDT Hi, a friend of mine has Halcyon set up on his own server and the time in-world did not change to PDT; it's still on PST. Is there a way he can change that (with a full grid restart maybe?), or is that a bug? Thank you. As far as I am aware, there is no mechanism that automatically handles changing between PST and PDT in Halcyon. Ideally, each grid should have its own time, based on the server clock of the hardware the grid server runs on. Unfortunately, the time for Second Life is hardcoded into the viewers, which should also be changed, but that is something to take up with the viewer devs. The world in question is my own world. This problem has not been repeated in other Halcyon world installations, so it may be unique to how I have the world set up or to the server's time configuration. I have not yet had time to update to Halcyon 0.9.41-master, so I do not consider this to be a real issue at this time. I think this is working as intended: not that it is correct for the time to fail to transition, but that the operating system and .NET are the main layers in control of time, not so much Halcyon itself. If further investigation proves me wrong this can and should be reopened.
2025-04-01T04:10:28.334718
2023-12-11T20:09:08
2036441881
{ "authors": [ "djabif", "splitice" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14427", "repo": "HalleyAssist/ion-range-calendar", "url": "https://github.com/HalleyAssist/ion-range-calendar/issues/24" }
gharchive/issue
Reset selected date Hi, how can I visually reset the selected dates? This is my ion-range-calendar:

<ion-range-calendar
  [ngClass]="{ 'single-date': dateRange.from === dateRange.to }"
  [(ngModel)]="dateRange"
  [options]="optionsRange"
  (selectStart)="onSelectStart($event)"
  [type]="type"
  [format]="'YYYY-MM-DD'">
</ion-range-calendar>

And I have a click event to clear the date selection:

this.dateRange = { from: undefined, to: undefined };

The calendar dateRange is set to undefined but the calendar UI is not updated. Thanks

Hi @Googlproxer thank you so much for updating the library :) I see you have a fix in 17.0.4 to allow undefined values but you didn't publish that version. The latest version on npm is 17.0.3; do you have any estimate of when you will be publishing it? Thanks again for all your work.

This is but a fork. We are not the official maintainers.

Thank you very much for the update. Reset works perfectly now ❤️
2025-04-01T04:10:28.339774
2017-05-25T19:14:57
231431942
{ "authors": [ "KunoichiZ", "jpmac26" ], "license": "WTFPL", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14428", "repo": "Hamcha/lumaupdate", "url": "https://github.com/Hamcha/lumaupdate/issues/80" }
gharchive/issue
Hourly build hash only displays 7 characters instead of 8 It also might be ignoring leading zeroes in the hash string, because I've seen it cut off the first character when it's a zero, but more recently I've noticed it cutting off the last character. However, the hash for the currently installed version seems to list the hash correctly. Also, I know the following screenshot is from version 2.1 which is a fork of this repo, but I know for certain I've experienced this issue on lumaupdate v1.4.2 several times, so I figured it'd be most appropriate to mention the issue here. That will be fixed in my fork.