Columns:
- added: string (date), from 2025-04-01 04:05:38 to 2025-04-01 07:14:06
- created: timestamp[us] (date), from 2001-10-09 16:19:16 to 2025-01-01 03:51:31
- id: string, lengths 4 to 10
- metadata: dict
- source: string, 2 classes
- text: string, lengths 0 to 1.61M
2025-04-01T04:10:12.631164
2024-01-10T21:11:27
2075211612
{ "authors": [ "Avery-Dunn", "g2vinay" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13577", "repo": "AzureAD/microsoft-authentication-library-for-java", "url": "https://github.com/AzureAD/microsoft-authentication-library-for-java/issues/773" }
gharchive/issue
[Bug] Managed Identity doesn't read the cert thumbprint from the Service Fabric MI env. Library version used 14.4.2-beta Java version JDK 17 Scenario ManagedIdentityClient - managed identity Is this a new or an existing app? None Issue description and reproduction steps Currently Managed Identity doesn't read the IDENTITY_SERVER_THUMBPRINT for the Service Fabric MI environment. This cert needs to be trusted by the client to pass TLS validation when a request is sent to the Service Fabric MI endpoint. Azure SDK for Java logic for the same can be found here Relevant code snippets No response Expected behavior No response Identity provider Microsoft Entra ID (Work and School accounts and Personal Microsoft accounts) Regression No response Solution and workarounds No response This is a duplicate of https://github.com/AzureAD/microsoft-authentication-library-for-java/issues/758 (which links to https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/issues/4462 in the .NET repo, with more info and a proposed solution)
2025-04-01T04:10:12.640697
2022-03-15T12:34:22
1169615331
{ "authors": [ "mylle", "samuelkubai" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13578", "repo": "AzureAD/microsoft-authentication-library-for-js", "url": "https://github.com/AzureAD/microsoft-authentication-library-for-js/issues/4604" }
gharchive/issue
HttpClient call requires login Core Library MSAL.js v2 (@azure/msal-browser) Core Library Version 2.13.1 Wrapper Library MSAL Angular (@azure/msal-angular) Wrapper Library Version 2.0.0-beta.3 Description Hi, I have an angular website that uses the MSAL to let colleagues login and ask company data. The calls to the endpoint must be authenticated (as it is company data). (endpoint example: api.company.be/aaa) This all works. Now I need to create an page for external people who need to see other data. I created an unauthorized endpoint on our company API and a page on the website. (endpoint example:: api.company.be/bbb) I made sure the specific URL doesnt require a login so unauthorized people can visit the page. This also works. But now I am stuck when trying to get the data from my unauthorized endpoint. Everytime I try to use an httpClient call, I'm getting redirect to a login page (which I don't want). Is there a way to fix this problem? FYI: I am not a angular expert MSAL Configuration export function MSALInstanceFactory(): IPublicClientApplication { return new PublicClientApplication({ auth: { clientId: '3454d924-4412-4505-9af6-XXXXXX', authority: 'https://login.microsoftonline.com/510df548-a539-40b9-8716-eb24afaccf1f', redirectUri: environment.MSAL_redirectUri, postLogoutRedirectUri: environment.MSAL_postLogoutRedirectUri, }, cache: { cacheLocation: BrowserCacheLocation.LocalStorage, storeAuthStateInCookie: isIE, }, system: { loggerOptions: { loggerCallback, logLevel: LogLevel.Warning, piiLoggingEnabled: false } } }); } export function MSALInterceptorConfigFactory(): MsalInterceptorConfiguration { const protectedResourceMap = new Map<string, Array<string>>(); protectedResourceMap.set(environment.API_URL, ['api://f1c2d297-0e55-4c4f-9a97-XXXXX/api-access']); return { interactionType: InteractionType.Redirect, protectedResourceMap }; } export function MSALGuardConfigFactory(): MsalGuardConfiguration { return { interactionType: InteractionType.Redirect }; } Relevant Code Snippets No response Identity Provider Azure AD / MSA Source Internal (Microsoft) Hi @mylle, thanks for reaching out to us. It looks like you are still using the beta version of msal-angular, whereas msal-angular is now on version 2.1.2. It would be great if you could upgrade. On the matter at hand, is endpoint set in the protectedResourceMap (environment.API_URL) the same as the authorized and unauthorized endpoint you are trying to use? (I.e. is environment.API_URI api.company.be) If it was the same, then whenever you makes an HTTP call, the msal-interceptor will intercept that call and try to access a token and prompt for login if there is no token, thinking that that endpoint has been protected. In this situation, you could either make the protectedResourceMap entry more specific (only have api.company.be/aaa) or unprotect the /bbb endpoint by adding protectedResourceMap.set('api.company.be/bbb', null); which would unprotect the route you do not want to prompt for login. You can get more information on the msal-interceptor here Hi @samuelkubai , I want to let you know everything worked like you explained! 
I changed my code to this: export function MSALInterceptorConfigFactory(): MsalInterceptorConfiguration { const protectedResourceMap = new Map<string, Array<string>>(); // No verification on this URL: protectedResourceMap.set(environment.API_URL + '/external/*', null); // Verification: protectedResourceMap.set(environment.API_URL, ['api://f1c2d297-0e55-4c4f-9a97-f327b500e832/api-access']); return { interactionType: InteractionType.Redirect, protectedResourceMap }; } Thank you very much for the explanation! Kind regards, Niels
2025-04-01T04:10:12.647011
2024-01-10T18:16:23
2074920833
{ "authors": [ "B-2-K" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13579", "repo": "B-2-K/kdu-coursework", "url": "https://github.com/B-2-K/kdu-coursework/pull/3" }
gharchive/pull-request
implemented hms Implemented hospital management system

| Metric | Value |
| --- | --- |
| alert_status | OK |
| bugs | 0 |
| code_smells | 0 |
| reliability_rating | 1.0 |
| security_rating | 1.0 |
| vulnerabilities | 0 |
2025-04-01T04:10:12.650493
2024-06-25T14:30:05
2372857677
{ "authors": [ "gifflet" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13580", "repo": "B00TK1D/copilot-api", "url": "https://github.com/B00TK1D/copilot-api/pull/8" }
gharchive/pull-request
Enhance Copilot Integration with Improved Message Handling and Token Validation This PR enhances the Copilot integration by introducing the following changes: Modified the copilot function to accept system_message and user_message parameters. Updated the request structure and headers to align with the latest Copilot API specifications. Implemented token validation with is_token_invalid and extract_exp_value functions to ensure the token is always valid. Introduced the MODEL constant to specify the model version and messages list to manage conversation history. I understand. It's appropriate to close this PR. Furthermore, I will be opening a PR in the B00TK1D/freegpt repository with this improvement, and a new one in this repository containing only the token validation.
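For illustration, a minimal Python sketch of what token validation along these lines could look like. The function names is_token_invalid and extract_exp_value come from the PR description, but the token format (a semicolon-separated key=value string carrying an exp timestamp) and the implementations below are assumptions, not the PR's actual code.

```python
import time
from typing import Optional

def extract_exp_value(token: str) -> Optional[int]:
    # Assumed token format: semicolon-separated key=value pairs, one of which is "exp".
    for part in token.split(";"):
        key, _, value = part.partition("=")
        if key == "exp" and value.isdigit():
            return int(value)
    return None

def is_token_invalid(token: str, leeway: int = 60) -> bool:
    # Treat the token as invalid if it has no exp field or expires within `leeway` seconds.
    exp = extract_exp_value(token)
    return exp is None or exp <= time.time() + leeway

# Usage: refresh the token before issuing a request.
# if is_token_invalid(current_token):
#     current_token = fetch_new_token()  # hypothetical helper
```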
2025-04-01T04:10:12.671448
2017-02-08T01:22:18
206067918
{ "authors": [ "ccoldwell", "mark-a-wilson", "paulroberts68", "watkinspd" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13581", "repo": "BCDevExchange/devex", "url": "https://github.com/BCDevExchange/devex/issues/61" }
gharchive/issue
Icon/Picture not showing in IE The icons for the program logos and users are not showing in IE 11 (and so presumably in government IE instances). @ccoldwell looks like the file extension is being stripped off. When we add .jpg to the file name it shows. strongly dislike IE fixed! way to go 👍
2025-04-01T04:10:12.684465
2015-07-14T19:21:46
95018274
{ "authors": [ "benedictpaten", "cket" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13582", "repo": "BD2KGenomics/slugflow", "url": "https://github.com/BD2KGenomics/slugflow/issues/167" }
gharchive/issue
User script hot deployment not functioning WARNING:jobTree.leader:2fa1bc1d-7dec-4b88-814f-42bfcc159a2b: ---JOBTREE WORKER OUTPUT LOG--- WARNING:jobTree.leader:2fa1bc1d-7dec-4b88-814f-42bfcc159a2b: Traceback (most recent call last): WARNING:jobTree.leader:2fa1bc1d-7dec-4b88-814f-42bfcc159a2b: File "/usr/local/lib/python2.7/dist-packages/jobTree/worker.py", line 267, in main WARNING:jobTree.leader:2fa1bc1d-7dec-4b88-814f-42bfcc159a2b: jobStore=jobStore) WARNING:jobTree.leader:2fa1bc1d-7dec-4b88-814f-42bfcc159a2b: File "/usr/local/lib/python2.7/dist-packages/jobTree/target.py", line 714, in _execute WARNING:jobTree.leader:2fa1bc1d-7dec-4b88-814f-42bfcc159a2b: returnValues = self.run(fileStore) WARNING:jobTree.leader:2fa1bc1d-7dec-4b88-814f-42bfcc159a2b: File "/usr/local/lib/python2.7/dist-packages/jobTree/target.py", line 818, in run WARNING:jobTree.leader:2fa1bc1d-7dec-4b88-814f-42bfcc159a2b: userFunction = self._getUserFunction() WARNING:jobTree.leader:2fa1bc1d-7dec-4b88-814f-42bfcc159a2b: File "/usr/local/lib/python2.7/dist-packages/jobTree/target.py", line 791, in _getUserFunction WARNING:jobTree.leader:2fa1bc1d-7dec-4b88-814f-42bfcc159a2b: return getattr(importlib.import_module(userFunctionModule.name), self.userFunctionName) WARNING:jobTree.leader:2fa1bc1d-7dec-4b88-814f-42bfcc159a2b: File "/usr/lib/python2.7/importlib/init.py", line 37, in import_module WARNING:jobTree.leader:2fa1bc1d-7dec-4b88-814f-42bfcc159a2b: import(name) WARNING:jobTree.leader:2fa1bc1d-7dec-4b88-814f-42bfcc159a2b: ImportError: No module named testUserScript WARNING:jobTree.leader:2fa1bc1d-7dec-4b88-814f-42bfcc159a2b: Exiting the worker because of a failed job on host ip-172-31-21-198 WARNING:jobTree.leader:2fa1bc1d-7dec-4b88-814f-42bfcc159a2b: ERROR:main:Exiting the worker because of a failed job on host ip-172-31-21-198 WARNING:jobTree.leader:2fa1bc1d-7dec-4b88-814f-42bfcc159a2b: WARNING:jobTree.job:Due to failure we are reducing the remaining retry count of job 2fa1bc1d-7dec-4b88-814f-42bfcc159a2b to 0 There are multiple implementations of startJobTree(). Only the stack.startJobTree() calls getUserScript, while the target.Runner.startJobTree() does not take user script into account at all, so it defaults to None. Why are there multiple implementations, and when should each be used? After adding the call to getUserScript() to target.runner.startJobTree(), it appears that hot deployment works successfully for the first task. However, on the child task I get an error when trying to load the job's pickle file. It appears that the user module is pickled in _serializeFirstJob(), but I don't see any similar functions for the subsequent jobs. I suspect that the user Module is just not being included in the subsequent job's pickle file, which causes them to try to import whatever the next token in the command is. There should be no stack class, the only way to run the jobTree should be through Target.Runner.startJobTree() On Tue, Jul 14, 2015 at 2:24 PM, Christopher Ketchum < <EMAIL_ADDRESS>wrote: There are multiple implementations of startJobTree(). Only the stack.startJobTree() calls getUserScript, while the target.Runner.startJobTree() does not take user script into account at all, so it defaults to None. Why are there multiple implementations, and when should each be used? — Reply to this email directly or view it on GitHub https://github.com/BD2KGenomics/slugflow/issues/167#issuecomment-121393613 . I have been testing with a script using functions as target. 
When trying to load the target on the worker, the "userModule" is reported as jobtree.target, and the "targetClass" is TargetFunctionWrappingTarget. This works for the very first task, but fails with the message "ImportError: No module named helloWorld" when trying to unpickle the subsequent task. It appears as though the user module is not added to the path until the target is being run, but it first has to be unpickled, which requires that it be on sys.path.
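To make the import-order point concrete, here is a minimal Python sketch of the general pattern the thread is describing: the directory containing the user script must be on sys.path before anything (including unpickling a subsequent job) tries to import it. The function and argument names are illustrative, not jobTree's actual API.

```python
import importlib
import os
import sys

def load_user_function(user_script_path, module_name, function_name):
    # Put the user script's directory on sys.path first, so that both a direct
    # import and unpickling of objects referencing the module can succeed.
    script_dir = os.path.dirname(os.path.abspath(user_script_path))
    if script_dir not in sys.path:
        sys.path.insert(0, script_dir)
    module = importlib.import_module(module_name)
    return getattr(module, function_name)

# e.g. load_user_function("/tmp/testUserScript.py", "testUserScript", "helloWorld")
```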
2025-04-01T04:10:12.719331
2023-05-04T06:07:02
1695319498
{ "authors": [ "ZeroLi-Bio", "junhouhui" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13583", "repo": "BGIResearch/stereopy", "url": "https://github.com/BGIResearch/stereopy/issues/117" }
gharchive/issue
How to draw nGene on umap Hello developers, I want to draw the nGene and nCounts on the UMAP scatter plot, I tried using: data.plt.umap(cluster_key='n_genes_by_counts', res_key='umap') data.plt.umap(cluster_key='n_counts', res_key='umap') But they didn't work. Is there a function to do this? Thank you so much! Hello, I am sorry, but the data.plt.umap function currently doesn't support plotting nGene and nCounts expression on UMAP plots. Notice: this issue has been closed because it has been inactive for 14 days. You may reopen this issue if it has been closed in error.
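As a workaround while the plotting function lacks this option, one can pull the UMAP coordinates and the per-cell QC metric out of the data object and scatter them directly with matplotlib. Where exactly the embedding and n_genes_by_counts live depends on the stereopy version, so the sketch below takes them as plain arrays rather than guessing the attribute paths.

```python
import matplotlib.pyplot as plt
import numpy as np

def plot_metric_on_umap(umap_xy, values, label="n_genes_by_counts"):
    # umap_xy: (n_cells, 2) array of UMAP coordinates; values: length-n_cells metric.
    umap_xy = np.asarray(umap_xy)
    values = np.asarray(values)
    fig, ax = plt.subplots(figsize=(6, 5))
    pts = ax.scatter(umap_xy[:, 0], umap_xy[:, 1], c=values, s=2, cmap="viridis")
    fig.colorbar(pts, ax=ax, label=label)
    ax.set_xlabel("UMAP 1")
    ax.set_ylabel("UMAP 2")
    return fig
```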
2025-04-01T04:10:12.759998
2024-10-26T11:20:33
2615781343
{ "authors": [ "simoncarrignon" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13584", "repo": "BIADwiki/BIADwiki", "url": "https://github.com/BIADwiki/BIADwiki/pull/1" }
gharchive/pull-request
Rewrite DB Connection Management This branch does a few things, all mainly related to connection and interaction with the database. Fixes a few issues (connections problem if wrapper() and fnc() ; when used locally different port where used -- minor) Make some functions slighlty quickier by avoiding opening and closing db connection before and after each query (makeing almost useless now to send quieries via ssh) Make the code a bit (but just a bit) safer more robust, by handling credentials through environment variable. New methods and functions for remote interactions Function init.conn() In this branch connections are handled by the function init.conn(). This return a connection handler that can be used by all the other functions. The function query.database, who send commands and return results as dataframe, is now calling init.conn() if no connector has been passed along. In this case, the connector obtained is assigned to conn at the .GlobalEnv level ; and is thus available everywhere in the project. Credential can be manually passed to init.conn() via the parameter db.credentials. This should be a list of the form: db.credentials <- list( BIAD_DB_USER='dbuser´, BIAD_DB_PASS='dpass', BIAD_DB_HOST='dbhost', BIAD_DB_PORT=1235 ) If no credentials are passed to init.conn() it will look for environmental variable (cf below) ; which are system wide; via Sys.getenv(). This allow to avoid having objects in the .Rprofile that are hanging around and may be overwritten without the user's knowledge. There are a lot of other options to handle credentials ; but so far that was what looked to me like the best one. Open to discussion (and more below)! In term of speed; i did a few benchmark in and out of the lab, using the very costly search function (due to the tree-like/recursive nature of the search). Results (below) show that by keeping the connection open, the difference between interacting remotely with the database vs sending a script to run the command on the server is small. In contrast, having everyone/anyone able to ssh to the server is something that, I think, will have to be remove at some point. Open to discussion here too ; but I think that this is a good tradeoff in term of securing and speed, getting us closer to an 'api-like' project. 
currentres=run.server.searcher(table.name = 'Sites', primary.value = 'S10050') microbenchmark::microbenchmark(run.server.searcher(table.name = 'Sites', primary.value = 'S10050'),times=5) # --- at the lab: #Unit: seconds # expr min lq mean median uq # run.server.searcher(table.name = "Sites", primary.value = "S10050") 2.323519 2.333059 2.408235 2.359926 2.496402 # max neval # 2.528267 5 # --- outside the lab: #Unit: seconds # expr min lq mean median uq # run.server.searcher(table.name = "Sites", primary.value = "S10050") 2.574199 2.777513 3.008144 2.852191 3.120371 # max neval # 3.716444 5 conn <- init.conn(db.credential=list(BIAD_DB_USER="simon carrignon",BIAD_DB_PASS="simon arrignon",BIAD_DB_HOST="<IP_ADDRESS>",BIAD_DB_PORT=3307)) remoteres=run.searcher(table.name = 'Sites', primary.value = 'S10050', conn = conn ) microbenchmark::microbenchmark(run.searcher(table.name = 'Sites', primary.value = 'S10050',conn = conn),times=5) # --- at the lab: #Unit: seconds # expr min lq mean median uq max # run.searcher(table.name = "Sites", primary.value = "S10050") 3.648775 3.700076 4.016982 3.966496 4.340351 4.429214 # neval # 5 # --- outside the lab: #Unit: seconds # expr # run.searcher(table.name = "Sites", primary.value = "S10050", user = user, password = password, host = host, port = ort) # min lq mean median uq max neval # 8.987218 9.038348 10.42252 9.338684 12.24129 12.50706 5 Credentials passed and saved via Environment Variables : Environment variables are set at the OS level. They allow to use the credentials outside of R (in python, in your fav shell, ssh,etc..). As they are OS-wide, and to avoid they overwrite other OS variables, I propose to use these new names: BIAD_DB_USER="your username" BIAD_DB_PASS="your password" BIAD_DB_HOST=<IP_ADDRESS> BIAD_DB_PORT=3306 #or something different if you specified a different port\n", The same is done for ssh related credentials: BIAD_SSH_PEM="/path/to/file.pem" BIAD_SSH_USER="biad" BIAD_SSH_HOST="macelab-server.biochem.ucl.ac.uk" bash users can thus add them in their ~/.bashrc but R can also automatically load environment variable using ~/.Renviron A ~/.Renviron will look like that: BIAD_DB_USER="your username" BIAD_DB_PASS="your password" BIAD_DB_HOST=<IP_ADDRESS> BIAD_DB_PORT=3306 #or something different if you specified a different port\n", BIAD_SSH_PEM="/path/to/file.pem" BIAD_SSH_USER="biad" BIAD_SSH_HOST="macelab-server.biochem.ucl.ac.uk" Whereas .bashrc will have: export BIAD_DB_USER="your username" export BIAD_DB_PASS="your password" export BIAD_DB_HOST=<IP_ADDRESS> export BIAD_DB_PORT=3306 #or something different if you specified a different port\n", export BIAD_SSH_PEM="/path/to/file.pem" export BIAD_SSH_USER="biad" export BIAD_SSH_HOST="macelab-server.biochem.ucl.ac.uk" With the envars set up in your .bashrc (or any config fav for your shell ) you can connect to the database without R like this: mysql -u "${BIAD_DB_USER}" -P $BIAD_DB_PORT -h $BIAD_DB_HOST -p"$BIAD_DB_PASS" biad or use ssh via: ssh -i $BIAD_SSH_PEM $BIAD_SSH_USER@$BIAD_SSH_HOST looks barbarian but trust me, it's nice! One direct advantage is that it allows to send ssh commands in a slightly easier and safer way: we don't have to copy the credentials in the R script, we can pass them directly to the shell. No need to write credential in the script. I also drafted function that leverage this to do only ssh system call (ie not run by R but by the operating system) ; instead of the ssh package. 
That illustrate well the usefulness of envars: run.server.query.inner.alt <- function(scriptname){ commands <- c( paste("cd",tmp.path), paste("/Library/Frameworks/R.framework/Resources/bin/R CMD BATCH --no-save",scriptname," > tmp.Rout"), "cd .." ) tmp.path <- tempfile(pattern = "tmpdir") linkcred="-i ${BIAD_SSH_PEM}" host="${BIAD_SSH_USER}@${BIAD_SSH_HOST}" #we rely again on the ENV var system(paste("ssh", linkcred, host, shQuote(paste("mkdir -p", tmp.path)))) sourcefold=here::here("R") #link to the source files filestosend <- paste(scriptname,file.path(sourcefold,"function*.R")) #send the script and all source file system(paste("scp", linkcred,filestosend, paste0(host,":",tmp.path,"/"))) res <- c("tmp.RData","tmp.Rout") res <- sapply(res,function(fn)paste(tmp.path,fn,sep="/")) system(paste("ssh", linkcred, host, shQuote(paste(commands, collapse = "; ")) ,collapse=" ")) dl <- sapply(res,function(fn)system(paste("scp", linkcred, paste0(host,":",fn),"."))) if(file.exists("tmp.RData")){ load('tmp.RData') unlink(c('tmp.RData','tmp.Rout')) } else{ query <- NULL na <- sapply(readLines("tmp.Rout"),function(i)cat(i,"\n")) unlink('tmp.Rout') warning('sql command failed') } } The function (not used yet in this branch, but left as a draft/illustration) doesn't need to know anything about the credentials, except the names of the variable. The value are only dealt with at the OS level. NOTE: The function init.conn() is still looking for .GlobalEnv variables to initialise the connection if it doesn't find any envar or the list db.credentials hasn't been given ; so in theory it's still possible to login with .Rprofile credentials. Other minor changes Sending script via SSH: run.server.query.inner/run.server.query. and run.server.searcher have been adapted to account for the new credentials. A few minor change have also been introduce ; the use of tempfile(pattern = "tmpdir") is introduced to generate temp directory where files are stored and the output of the script run is shown to user if no data is returned. The core files providing all functions (functions.R and functions.database.connection.R) are not downloaded from github anymore ; but sent from the computer doing the request. This should disappear on the long run, when the project become a package, as the package will be installed on BIAD and thus there will be no need of sending anything ; simply loading the package will do. Divers I may have changed a few minor things here and there that are not listed here, but they shouldn't impact anything. Call to external library (mainly DBI and ssh) are indicated using library_name::function() convention . Will help avoiding some confusion The branch tries to use init.comm() and the conn and db.credentials arguments in a consistent way within all the functions that potentially need to query the database (via the query.database function). This is making the whole thing a bit more robust to arguments/variable overwriting (for example in wrapper function). The query.inner.database has been deleted, warnings are now only suppressed at the level of the DBI::dbSendStatment calls in query.database. A few tryCatchs have been added, to help users handling connections errors. The branch starts to implement Roxygen comments. This will allow to automatically generate documentation if/when the repo becomes a package. 
They can look like that (for a very extended version): #-------------------------------------------------------------------------------------------------- #' Initialize Database Connection #' #' Establishes a connection to the BIAD database using provided credentials or environment-sourced credentials. #' If no credentials are supplied, the function attempts to retrieve required details from environment variables. #' #' @param db.credentials A list containing database connection details, specifically: #' - `BIAD_DB_USER`: The database username. #' - `BIAD_DB_PASS`: The database password. #' - `BIAD_DB_HOST`: The host address of the database. #' - `BIAD_DB_PORT`: The port number for the database connection. #' If not defined, these values are fetched from the environment variables set in your `~/.Renviron` #' or in your `~/.bashrc` #' #' @return A DBI connection object to the MySQL database. #' #' @seealso \code{\link[DBI]{dbConnect}} for more details on database connections. #' #' @examples #' \dontrun{ #' # Connect using environment variables: #' conn <- init.conn() #' #' # Connect using explicitly provided credentials: #' db.credentials <- list( #' BIAD_DB_USER = "my_user", #' BIAD_DB_PASS = "my_password", #' BIAD_DB_HOST = "localhost", #' BIAD_DB_PORT = 3306 #' ) #' conn <- init.conn(db.credentials) #' } #' #' @export (and we obviously keep the #-------- separators) Ending Notes That was just a few things that appears while I was playing with the code to explore the Sites table. It's far from perfect or good, so need to be reviewed a bit. Not all propositions have to be accepted (I should have split that in multiple branches/patches but with these connections things everything is so interconnect that it grew a bit organically). I also have no preference regarding formatting and variable naming/indentation. I thing at some point will be worth having a pull request template and guideline to be sure we all do the things the same way. I realise that I had a few changes that I have done in my personal development branch that I did long after this pull request to help handling the connection (and would have soved some of the problem we found today (19/11/24) @AdrianTimpson, but wasn't part of the pull request to avoid puting to much on it. Most of it solve some problems related to the run.server.search and run.search functions, to be sure they are fully compatible with the new connection . I also introduced I a check.conn function that allow to deal with connection check/disconnection on the side. I pushed them in . It should change nothing to the other things we see, just make query.database more readable, and allow to handle connector in other context (I needed to be able to check and re-connect separatly). So basically it wrap up test that were in query.database in a separate function. It takes a connector and pass it through these tests to see if it the connector still works. if it doesn't, it will close it and re-assign to a valid one. They should be all in after commit 79f23cdd39f1e1c1672a3b1d988a642d9bb6ad0b did a pull request too justto keep track of thing omore easily
2025-04-01T04:10:12.763506
2024-05-07T23:00:01
2284356222
{ "authors": [ "Artur-man", "wcko" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13585", "repo": "BIMSBbioinfo/VoltRon", "url": "https://github.com/BIMSBbioinfo/VoltRon/pull/91" }
gharchive/pull-request
as.AnnData function to support using anndataR in addition to anndata Adding support for anndataR Just a quick thing: adding anndataR to Suggests in the DESCRIPTION file. So it's good to go.
2025-04-01T04:10:12.772484
2022-04-01T20:41:11
1190251901
{ "authors": [ "mattaebersold", "weotch" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13586", "repo": "BKWLD/cloak-customer", "url": "https://github.com/BKWLD/cloak-customer/issues/1" }
gharchive/issue
PageMixin Investigate to see if I still need this. Account pages shouldn't be indexed, so there should be no SEO-related needs. I'm ok dropping this for now. Right now it does two main things: create SEO tags and support Craft Live Preview. Neither is really a thing for account pages. I'm not really sure yet where I want the page-mixin to live in the distributed Cloak 2.0. FYI, I've decided that the pageMixin will ultimately live in @cloak-app/craft (which doesn't exist yet).
2025-04-01T04:10:12.797430
2021-05-10T22:37:13
885262923
{ "authors": [ "fire-rab-bit", "oconnor663" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13587", "repo": "BLAKE3-team/BLAKE3", "url": "https://github.com/BLAKE3-team/BLAKE3/issues/172" }
gharchive/issue
Hash speed drops on XEON (hashing to standard length, 32 bytes) Dear Blake3 Team, BACKGROUND: My Coursework is about hashing speed. I tested Blake3, Shake and K12 with different inputs and outputs. input sizes tested: 1KB till 10GB; output sizes tested: standard output length (i.e. 32 bytes) till 1 GB, dumped as HEX. The speedtest was done on desktop Intel i7 x64, mobile ARMv8, and server on Xeon x64. Each hash operation was repeated 10 times and the average time was calculated to avoid unwanted breakouts during the measurement. ISSUE: For Blake3, On XEON I got very confusing speed results, which i do not understand yet. Could you please support me in undestanding the issue better? I've found out, that hashing to 32 byte std len takes significantly more time compared to sponge the output (e.g. to 500 bytes length). My script tested many constallations of input and output sizes and run for approx. 1,5 days in total on the server. MY RESULTS ON XEON: 1KB file input and 32 bytes outputs takes: 0.0124169347 seconds in average (10 times repeated) 1KB file input but 500 bytes output takes: 0.0023664945 seconds in average (10 times repeated) Hashing to 32 bytes output takes longer than hashing to 500 bytes output. Manual tests confirmed that. My scripts incrementally increases input and output constellations. The gap between 32 bytes and 500 bytes output, increases significantly the longer the script runs. Finally, when it came to a 10GB file: 10GB file input and 32 bytes output: 14.5859859031 seconds in average (10 times repeated) 10GB file input and 500 bytes output: 1.8065879838 seconds in average (10 times repeated). ... 10GB file input and 10000000 output: 2.0749627773 seconds in average (10 times repeated) ... MY COMMAND: b3sum $key_file -l $output_size > speedtest.txt CONCLUSIONS? I could imagine that the difference is increasing on XEON, because i am running each constallations 10 times and calculating the average, and seems that the XEON cpu does not like this .. Manual tests confirmed a gap, BUT not in this scale. However on ARM and Intel i7 there is also a gap between 32 bytes and 500 bytes output visible, but comparably very tiny (approx. 150 milliseconds in average). And it does not matter if a 1KB input is hashed or 10 GB, the gap remains at a very tiny range of approx 150 milliseconds. If i run the script the manually for the 10GB file input >> the gap is much smaller than above. I can imagine that the issue above occurs when hashing for a longer period of time. HINT: Please be aware: when you screen the results-blake3-XEON.txt. For the first 2 lines "4K" file output is stated. This is just an display issue. SETUP, RAW RESULTS: Script in use: https://korac.info/speedtest-blake3.sh Speedtest protocol for blake: https://korac.info/results-blake3-XEON.txt System – x64 architecture - XEON based cloud server (dedicated) Intel Xeon E31230; 4 cores @ Max. 3200 MHz GPU none RAM 4 GB Ubuntu 20.04 2TB HDD I would be glad to get in a conversation with you about this observation. If you have any questions, you are very welcome and I will provide more info immediately, I can provide you my script and the full protocol. Cheers, Pille I skipped the 32 byte output step and directly output to 500 bytes. To me, this was the most mysterious part of your results, the fact that extracting more output was faster (and actually dramatically faster) than extracting less output. I'd love to dig to the bottom of it. 
For example, if you could give me a script that just reproduces this result in your environment, I'd love to try it in my environment: 1KB file input and 32 bytes outputs takes: 0.0124169347 seconds in average (10 times repeated) 1KB file input but 500 bytes output takes: 0.0023664945 seconds in average (10 times repeated) Looking at speedtest-blake3.sh more carefully, I think one thing I'd change is that I wouldn't write the output to disk. So like instead of: b3sum $key_file -l $output_size > speedtest.txt I'd write: b3sum $key_file -l $output_size > /dev/null The issue here is that disk writing is much more expensive than disk reading, and it means that you're always touching the disk itself, rather than just reading cached files from RAM after your first loop iteration. The speed at which b3sum's output can be written to disk isn't really what you want to benchmark, since it's purely a function of your filesystem and your disk and doesn't really have anything to do with b3sum. What you want to benchmark might be how quickly b3sum can produce output, and you'll get a more meaningful number there by redirecting to /dev/null. (Also be aware that the output is hex encoded unless you use --raw, which doubles the number of bytes emitted and then adds a newline.) Dear O'Connor, i have adapted the script with the comments you made. I forwarded the output to /dev/null, without no difference. You can find the script you wish here and try on your own: https://korac.info/blake3-retests/retest-blake3.sh You can choose your own file for tests or create same as mine: dd if=/dev/zero of=key_8GB.file bs=1024 count=8000000 I also called b3sum manually and wrote to /devl/null. As you see, running the script directly twice, reduces the hashing time significantly. **root@s82493:~/retest# ./retest-blake3.sh Input key_10GB.file Output 32 AVERAGE 9.03456048580000000000 Input key_10GB.file Output 500 AVERAGE 6.60389203540000000000 root@s82493:~/retest# ./retest-blake3.sh Input key_10GB.file Output 32 AVERAGE 1.39118891920000000000 Input key_10GB.file Output 500 AVERAGE .19507304710000000000** In an other manual test, i changed the output size alternately. You can see a continous decrease of hashing. But at the end, the 32 output is slower, see https://korac.info/blake3-retests/8_retest-manually-blake3-XEON-point_to_DEV_NULL.txt for example. CONCLUSION: In general I think it is not the case, that 32 bytes is slower than 500 bytes. The case is: the first is slower, all hashing operations after are faster (due to caching?). I narrow it down, that the issue here with READING and caching by the Hard Disk Drive on the tested XEON system. When HDD are in use the hashing time is very volatile. BTW: I checked also the results against my X64 laptop and my ARM. The first uses SSD and the second eMMC (Galaxy Tab S5e). In both systems only a very very tiny gap is visible. So therefore this supports the argument, that the identified gap is mainly when reading and chaching due to use of HDD. 
FULL RESULTS FOR OTHER PLATFORMS: x64 Zbook i7 are: K12: https://korac.info/ARM-Tab-S5e/results-k12-ARM-all.txt Shake: https://korac.info/ARM-Tab-S5e/results-shake-ARM-all.txt Blake3: https://korac.info/ARM-Tab-S5e/results-blake3-ARM-all.txt for ARM, Tab S5e are: K12: https://korac.info/x64-i7-zBook-360-G5/k12-results.txt Shake: https://korac.info/x64-i7-zBook-360-G5/shake-results.txt Blake3: https://korac.info/x64-i7-zBook-360-G5/speedtest-results-blake3-zBook.txt The case is: the first hash operation is slower, all hash operations after are faster Ah, I think you might've been telling me this above when you said "same result", but then I might've understood. And yes, this is pretty much what we'd expect. One good generic technique for dealing with these problems, is to always "throw away your first run". That is, if you're timing a loop of 10 iterations, ignore the first one and only average the times from the last 9. That's often enough to get rid of simple caching effects like this one. Fancy benchmarking frameworks like Criterion.rs will do a few whole seconds of "warming up" runs, before they even begin to count time. (That might be enough to get your processor out of TurboBoost, for example, if you've left TurboBoost on.) I'll go ahead and close this issue, but feel free to comment with more questions or open new ones. Dear @oconnor663 i am in final stage of my coursework. Would it be ok for you to mention the interaction we had in my critical appraisal, like this draft below: _Critical appraisal for executed speed tests It was quite difficult to measure all constellations and isolate the influencing factors to reduce noise. It was necessary to adapt the speed tests several times during the coursework. At the beginning, each hash operation was executed only one time. It was identified that this approach has a high risk of noise. Therefore this results had to be withdrawn, which meant to withdrawn approx. 2 days of time. To reduce noise, it was necessary to repeat the measurement again and to execute each hash operation 10 times to calculate the average time. Secondly, after screening the results on system 3 (i.e. XEON with HDD), it was recognized that the first operation took always significantly longer. The first operation was hashing to a standard hash length of 32 bytes. But the second operation was hashing to 500 bytes and took only a fraction of the previous one. For example, Blake3 hashed a 10GB file to 32 byte output in 14.59 seconds. Directly after this, the same file was hashed to 500 bytes in 1.80 seconds. So the second operation needed 12.79 seconds less. This behavior was quite strange at the first look. O’CONNOR from the Blake3-Team was asked for support to understand the behavior; see issue 172 on https://github.com/BLAKE3-team/BLAKE3. After several manual retests and assumptions, the reason for that behavior is most-likely the reading and data caching process when HDDs were in use. Each hash operation pointed to the same input file. The system works than with cached data if the next operation points to the same file (-name). This big difference (e.g. 12.79 seconds in case of 10 GB input) was only observed when HDDs were in use. If SSDs were in use, the difference was in most of all cases only a 10-200 milliseconds. In consequence all tests for 32 bytes output were repeated on the systems again to ensure comparability. Ideally, for each hash operation the file should have different names, even if they are same and of same size (e.g. 
“key_10GB.file.A”, “key_10GB.file.B”, etc.). By this, the tests would include the reading speed. However, this was not necessary for the coursework, as the coursework only tried to compare the three hash functions with each other. The setup was the same for every hash function on each system. Therefore comparability between the hash functions is ensured and the noise level is reduced. The hash result was written to a file after each hash operation. Finally, it was discussed whether to write the output to /dev/null instead of to disk. After manual retests it was identified that this would not solve the afore-mentioned problem and was therefore not implemented._ What do you think? If I do not get any feedback by tomorrow 2pm German time, I will not include your name, to be safe. Of course you can get a copy of the final / draft coursework, if you wish. I'm not sure what it is you're asking for here. Maybe let's take this part to email. You can reach me at <EMAIL_ADDRESS>
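A minimal Python sketch of the "throw away your first run" advice applied to this benchmark: time b3sum with its output sent to /dev/null, discard the first (cache-warming) run, and average the rest. The file name and output lengths are just the ones from the thread; the script itself is illustrative and not part of b3sum.

```python
import subprocess
import time

def bench_b3sum(path, output_len, repeats=10):
    # Run once extra and throw that first (cache-warming) run away.
    times = []
    for _ in range(repeats + 1):
        start = time.perf_counter()
        subprocess.run(["b3sum", path, "-l", str(output_len)],
                       stdout=subprocess.DEVNULL, check=True)
        times.append(time.perf_counter() - start)
    return sum(times[1:]) / repeats

# Example: compare the two output lengths discussed above on the same warmed file cache.
# print(bench_b3sum("key_10GB.file", 32), bench_b3sum("key_10GB.file", 500))
```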
2025-04-01T04:10:12.858508
2023-10-19T16:08:15
1952616638
{ "authors": [ "alexjaffray", "cncastillo", "felixhorger", "mrikasper" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13588", "repo": "BRAIN-TO/GIRFReco.jl", "url": "https://github.com/BRAIN-TO/GIRFReco.jl/issues/12" }
gharchive/issue
Reduction of strong dependencies in Project.toml I think the number of strong dependencies in the package needs to be reduced to a minimum to avoid compatibility problems, based on what you want the core package to do. As @felixhorger mentioned, one way of doing this is to separate the package into various simpler packages with minimal dependencies (which I am a fan of!). Another way of doing this, which is easier to implement, is to use package extensions. Having said that, there are some packages that ultimately should not be in the Project.toml of the main package, like Literate.jl and Documenter.jl as these correspond to the Project.toml of the docs (the same for the tests, and the generation of the figures for the paper). This comment also applies to MRIGradients.jl (whose name is a little confusing because it only appears to do GIRF correction). The main point that I want to come across is that if, as a user, I want to do only a GIRF-corrected recon, having my k-space, GIRF, and coils estimated my way, I do not expect a package called GIRFReco.jl to use DL tools like Flux.jl for B0 estimation and coil estimation, or to require Nifti.jl, MAT.jl, Unitful.jl, and others. In the specific case of Flux.jl, it may not be needed at all, and the gradient could be computed manually (I only saw it being used in pcg_ml_est_fieldmap and sens_smooth). Arguably, functions that do B0 and coil sensitivity estimation should not be part of this package at all. Going back to the package extensions, the idea is to define a function only if the package is explicitly imported and added as a weak dependency. A simple example is the function readGIRFFile (from MRIGradients.jl) that depends on MAT.jl (for some reason, MAT.jl is also a dependency of GIRFReco.jl?). Only if the user does this using MRIGradients, MAT Then you get the function readGIRFFile defined in /ext/MRIGradientsMATExt.jl as module MRIGradientsMATExt import MRIGradients using MAT # definition of readGIRFFile with more arguments, inside MRIGradients the function # should be defined as an empty function: function readGIRFFile end MRIGradients.readGIRFFile( ...) ... end end that should also be added to the Project.toml [weakdeps] MAT = "..." [extensions] MRIGradientsMATExt = "MAT" For the case of Flux.jl, if it is truly necessary, I think the best way of doing this would be to add Flux as a weak dependence of MRIFieldmaps.jl and MRICoilSensitivities.jl. In that way, if I want to do a field map estimation, I do using MRIFieldmaps # My code that uses MRIFieldmaps data = ... # read data somehow B0 = estimate_field_map(data, ..) #It is probably not called like this and if I want to use the functions defined with Flux using MRIFieldmaps, Flux # My code that uses MRIFieldmaps data = ... # read data somehow B0 = estimate_field_map(data, ..) #I can estimate B0 the "traditional way" B0 = estimate_field_map_flux(data, ..) #But also using Flux for the gradient! Hope this is useful, Cheers! Dear Carlos @cncastillo, Thank you so much for this detailed feedback, we will go through the dependencies carefully and see which ones can be dropped/transformed into weak dependencies. Dear Lars @mrikasper, I will reply to the JOSS comment here. First of all, I just want to clarify that this package has done a fantastic job, and I understand what a thankless job it is to publish an open-source package. 
Making open-source software useful for the community requires some painful, and not really publishable, laborious steps: documentation, testing, following the language guidelines, etc. I do not think the modularization/simplification of packages is Julia-specific or reconstruction-specific thing, just that people have realized that making a package more flexible and small (with fewer dependencies) makes it more useful and easier to maintain in the long run. This also makes packages easier to understand and increases the possibility of them being used (if properly documented). Just take a look at the package in NPM is-odd that has over 18M downloads. That happened with MRIReco.jl, which started as a big package to be then subdivided into useful simpler submodules, and my package KomaMRI.jl, which I also subdivided and I continue to do so to accommodate for more use cases (specifically moving IO to a submodule and GPU-related packages as a package extension). Both cases could be similar to yours, with an umbrella package that does everything in the pipeline, and specific packages that do each step, giving freedom to the user to install just a part of the pipeline. As you have improved the coil sensitivity and B0 map estimation from preexisting packages, those could be their own submodules. Besides the philosophical view about coding practices and beliefs, there are some practical benefits, maybe Julia-specific, of developing packages in this way. If a package updates and breaks one part of your code, that error is restrained within the module, making it easier to fix and patch. Moreover, it makes dealing with package compatibility much easier ([compat] section of Project.toml), and avoids problems with installation conflicts (I can selectively install what I need). Finally, it also has some performance benefits, as a package with fewer dependencies uses less disk space, installs faster, compiles faster, and loads faster (especially if you use a large package like CUDA). In my experience, every new dependency adds a new possible point of failure, and every test finds a new bug. Cheers! First of all, I just want to clarify that this package has done a fantastic job, and I understand what a thankless job it is to publish an open-source package. Making open-source software useful for the community requires some painful, and not really publishable, laborious steps: documentation, testing, following the language guidelines, etc. I can only repeat that! :) Thanks @felixhorger and @cncastillo, @mrikasper I will start going through the dependencies. Many of these dependencies arise from how we use the pipeline and package ourselves (i.e we use its Project.toml as a base environment for our reconstruction work and ad-hoc development, we should probably stop doing this). I think this should be straightforward to clean, I should have it done early this week. Relevant to this is also @mrikasper's comment in the JOSS review This is what our submission is mainly about (a pipeline for...) I think a reasonable thing to do is to modularise your pipeline into abstract steps, potentially outsource into separate packages (e.g. the B0 field estimation), and provide an overarching function which starts the whole pipeline, given the dictionary of parameters. It can then read from ISMRMRD and write into NIFTI files, exactly as it is now. I recommend to add one version of that function where arrays are provided instead of reading them from file, and arrays are returned instead of writing to NIFTI. 
Further, I recommend that all subparts of the pipeline follow the same principle of working on arrays rather than files. With this setup, beginners are happy because they have a plug and play version and advanced users are happy because it's modular and they can exchange parts of the pipeline and are not forced to provide data in a specific format or always write to disk. Happy to hear your opinions on this :) @felixhorger @cncastillo We’ve been thinking hard about how best to implement your suggestions surrounding packageization and dependencies into our pipeline. As our repo is intending to provide a complete pipeline for image reconstruction, further separating out into many small subpackages will atomize it, making it difficult to meet the needs of implementation of our application, and the needs of access of the potential users. There are already many subpackages in the MRI Julia ecosystem; our repo has tried the best to utilize these subpackages, and it may not be necessary for further compartmentalization. To provide a concise demo as quick-start for the new users, we need a comprehensive recipe or example of integrating everything together for real-world reconstructions. We have elected to pursue Option 1 as suggested in Issue #14 for reducing the dependencies and also to streamline the running of the examples. This will achieve the target of a significantly more lightweight GIRFReco.jl. Meanwhile, we also considered the suggestions of strictly using Arrays in our pipeline. For the purpose of reusing the currently available modules/classes, we chose to facilitate all k-space data with RawAcquisitionData and AcquisitionData types, which are provided by MRIBase in MRIReco.jl. They are well-built modules providing all necessary functionality methods and interfaces for processing k-space data, along with their metadata which is necessary for our pipeline. Furthermore, MRIReco.jl, one of the packages that our pipelines depends on, is providing the flexibility for users to generate their own coil sensitivities, B0 maps, etc. based on their own data. Our repo’s major target is to provide a pipeline or recipe through some brief demos, as well as the utility functions to glue the recipe together. The users are free to provide their own (coil and/or B0) maps, and to replace the processing methods outlined in the demos to extend them for their own use cases. Would this approach be alright? Option 1 sounds good to me! Using RawAcquisitionData and AcquisitionData is fine because MRIReco.jl made this kind of a standard. One point that I nonetheless want to flag is the utils folder (as the name suggests it's functionality GIRFReco.jl needs but doesn't fit into a clear category): utils/Utils.jl: this file contains quite diverse functionality, you already noticed that because you named it "Utils", i.e. it's hard to find a matching name (contains e.g. plotting, manipulating trajectory and raw data, high-level functionality for applying GIRF corrections). I think this should be easy to split into separate files within GIRFReco.jl. For example utils/plot.jl, ismrmrd.jl, and utils/applyGIRF.jl. utils/fieldMapEstimator.jl: I suggest you outsource that because it is quite self-contained (apart from estimateB0Maps() which has to remain in GIRFReco.jl) and it would be nice to have this functionality outside of GIRFReco.jl. This functionality probably has to go somewhere else eventually (MRIReco?), but for now I propose it's alright to put it in a separate package. 
Is there any barrier to this I'm not seeing? utils/variationalSmoother.jl: Same as before, it's self-contained and could go into another package, until the opportunity arises to incorporate it into e.g. MRIReco's MRICoilSensitivities.jl. I think doing that won't be a great effort (~1h?), and doesn't disagree with the major target of providing the pipeline as the main functionality but would have a positive effect on new users/developers of GIRFReco.jl. Happy holidays everyone! :) Hi! I hope everyone had a good Christmas 🎄. We have elected to pursue Option 1 as suggested in Issue https://github.com/BRAIN-TO/GIRFReco.jl/issues/14 for reducing the dependencies and also to streamline the running of the examples. This will achieve the target of a significantly more lightweight GIRFReco.jl. Option 1 sounds good to me too. There are already many subpackages in the MRI Julia ecosystem; our repo has tried the best to utilize these subpackages, and it may not be necessary for further compartmentalization. Regarding the package separation, I agree with @felixhorger that B0- and coil-related functions should not be in this package, and as said, the purpose of the package remains intact. Cheers! We have worked on this in #14 and have been able to remove quite a few dependencies, and are now in the process of consolidating our changes with the documentation workflow and writing tests for CI and coverage. We have also elected to remove the variational smoothing code from the repo. The off-resonance estimation code is still being thought about in terms of an appropriate home for it.
2025-04-01T04:10:12.906844
2016-09-18T22:29:20
177681662
{ "authors": [ "tnhu" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13589", "repo": "BZCoding/SkelEktron", "url": "https://github.com/BZCoding/SkelEktron/issues/1" }
gharchive/issue
Application menu is always shown as "Electron" For some reason, the application menu is always shown as "Electron" instead of the application name (tested on Mac). Per https://stackoverflow.com/questions/36123823/electron-wont-read-application-name, the app needs to be packaged.
2025-04-01T04:10:12.914989
2023-09-14T12:45:20
1896501522
{ "authors": [ "JackLau1222", "RoyaltyLJW", "taoboyang" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13590", "repo": "BabitMF/BabitMF.github.io", "url": "https://github.com/BabitMF/BabitMF.github.io/issues/11" }
gharchive/issue
error in generatemode demo https://babitmf.github.io/docs/bmf/multiple_features/graph_mode/generatemode/ when run the demo in it, AttributeError happen, 'bmf.lib._bmf.sdk.Packet' object has no attribute 'to_ndarray' Hello, I'm really sorry. The code version in the document is the version we use internally and is relatively old. Please try the code I gave you first. The function is exactly the same and see if it can solve your problem. Document I will update the content later pkts = ( bmf. graph() .decode({'input_path': "../files/img.mp4"})['video'] .ff_filter('scale', 299, 299) # or you can use '.scale(299, 299)' .start() # this will return a packet generator ) for i, pkt in enumerate(pkts): # convert frame to a nd array if pkt.is_(bmf.VideoFrame): vf = pkt.get(bmf.VideoFrame) rgb = mp.PixelInfo(mp.kPF_RGB24) np_vf = vf.reformat(rgb).frame().plane(0).numpy() # we can add some more processing here, e.g. predicting print("frame", i, "shape", np_vf.shape) else: break Thanks you a lot. The demo you give can work. And I have another two question to ask. I wish to decode videos which is yuv420p format, and get the v channel of them. The code shown in the first block. The parameters threads can help the frame extract faster. With a lot of videos to decode, any other methods can help speed the decode process? with cpu only. video = graph.decode({ 'input_path': input_path, "log_level":"quiet", "dec_params": {"threads": "12"}, })['video'].start() # this will return a packet generator for i, pkt in enumerate(video): # convert frame to a nd array if pkt.is_(bmf.VideoFrame): vf = pkt.get(bmf.VideoFrame) np_vf = vf.frame().plane(2).numpy() elsee break What's more, when run the code, I find that a lot of info are output in the stdout, shown in the next block. the "log_level" only works with logs related to ffmpeg. How to make it quiet. I think it can also help speed. 
[2023-09-15 14:56:58.436] [info] c_ffmpeg_decoder c++ /root/mambaforge-pypy3/envs/BabitMF/lib/python3.7/site-packages/bmf/lib/libbuiltin_modules.so libbuiltin_modules.CFFDecoder [2023-09-15 14:56:58.437] [info] Module info c_ffmpeg_decoder c++ libbuiltin_modules.CFFDecoder /root/mambaforge-pypy3/envs/BabitMF/lib/python3.7/site-packages/bmf/lib/libbuiltin_modules.so [2023-09-15 14:56:58.441] [info] Constructing c++ module Input #0, mov,mp4,m4a,3gp,3g2,mj2, from './720p.mp4': Metadata: major_brand : isom minor_version : 512 compatible_brands: isomiso2avc1mp41 encoder : Lavf59.27.100 Duration: 00:01:00.03, start: 0.000000, bitrate: 2229 kb/s Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 720x1280 [SAR 1:1 DAR 9:16], 2087 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc (default) Metadata: handler_name : ?Mainconcept Video Media Handler encoder : Lavc59.37.100 libx264 Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 133 kb/s (default) Metadata: handler_name : #Mainconcept MP4 Sound Media Handler [2023-09-15 14:56:58.461] [info] c++ module constructed [2023-09-15 14:56:58.461] [info] BMF Version: 0.0.8 [2023-09-15 14:56:58.461] [info] BMF Commit: af1b35a [2023-09-15 14:56:58.461] [info] start init graph [2023-09-15 14:56:58.461] [info] scheduler count1 debug queue size, node 0, queue size: 5 [2023-09-15 14:56:58.461] [info] node:c_ffmpeg_decoder 0 scheduler 0 [2023-09-15 14:57:01.763] [info] node id:0 decode flushing [2023-09-15 14:57:01.763] [info] node id:0 Process node end [2023-09-15 14:57:01.767] [info] node id:0 close node [2023-09-15 14:57:01.767] [info] node 0 close report, closed count: 1 [2023-09-15 14:57:01.767] [info] node id:-1 eof received [2023-09-15 14:57:01.768] [info] node id:-1 eof processed, remove node from scheduler [2023-09-15 14:57:01.768] [info] schedule queue 0 start to join thread [2023-09-15 14:57:01.768] [info] schedule queue 0 thread quit [2023-09-15 14:57:01.768] [info] schedule queue 0 closed [2023-09-15 14:57:01.768] [info] all scheduling threads were joint Thanks you a lot. The demo you give can work. And I have another two question to ask. I wish to decode videos which is yuv420p format, and get the v channel of them. The code shown in the first block. The parameters threads can help the frame extract faster. With a lot of videos to decode, any other methods can help speed the decode process? with cpu only. video = graph.decode({ 'input_path': input_path, "log_level":"quiet", "dec_params": {"threads": "12"}, })['video'].start() # this will return a packet generator for i, pkt in enumerate(video): # convert frame to a nd array if pkt.is_(bmf.VideoFrame): vf = pkt.get(bmf.VideoFrame) np_vf = vf.frame().plane(2).numpy() elsee break What's more, when run the code, I find that a lot of info are output in the stdout, shown in the next block. the "log_level" only works with logs related to ffmpeg. How to make it quiet. I think it can also help speed. 
[2023-09-15 14:56:58.436] [info] c_ffmpeg_decoder c++ /root/mambaforge-pypy3/envs/BabitMF/lib/python3.7/site-packages/bmf/lib/libbuiltin_modules.so libbuiltin_modules.CFFDecoder [2023-09-15 14:56:58.437] [info] Module info c_ffmpeg_decoder c++ libbuiltin_modules.CFFDecoder /root/mambaforge-pypy3/envs/BabitMF/lib/python3.7/site-packages/bmf/lib/libbuiltin_modules.so [2023-09-15 14:56:58.441] [info] Constructing c++ module Input #0, mov,mp4,m4a,3gp,3g2,mj2, from './720p.mp4': Metadata: major_brand : isom minor_version : 512 compatible_brands: isomiso2avc1mp41 encoder : Lavf59.27.100 Duration: 00:01:00.03, start: 0.000000, bitrate: 2229 kb/s Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 720x1280 [SAR 1:1 DAR 9:16], 2087 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc (default) Metadata: handler_name : ?Mainconcept Video Media Handler encoder : Lavc59.37.100 libx264 Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 133 kb/s (default) Metadata: handler_name : #Mainconcept MP4 Sound Media Handler [2023-09-15 14:56:58.461] [info] c++ module constructed [2023-09-15 14:56:58.461] [info] BMF Version: 0.0.8 [2023-09-15 14:56:58.461] [info] BMF Commit: af1b35a [2023-09-15 14:56:58.461] [info] start init graph [2023-09-15 14:56:58.461] [info] scheduler count1 debug queue size, node 0, queue size: 5 [2023-09-15 14:56:58.461] [info] node:c_ffmpeg_decoder 0 scheduler 0 [2023-09-15 14:57:01.763] [info] node id:0 decode flushing [2023-09-15 14:57:01.763] [info] node id:0 Process node end [2023-09-15 14:57:01.767] [info] node id:0 close node [2023-09-15 14:57:01.767] [info] node 0 close report, closed count: 1 [2023-09-15 14:57:01.767] [info] node id:-1 eof received [2023-09-15 14:57:01.768] [info] node id:-1 eof processed, remove node from scheduler [2023-09-15 14:57:01.768] [info] schedule queue 0 start to join thread [2023-09-15 14:57:01.768] [info] schedule queue 0 thread quit [2023-09-15 14:57:01.768] [info] schedule queue 0 closed [2023-09-15 14:57:01.768] [info] all scheduling threads were joint if you want to speed up the decoding step, you can use cuda to accelerate. Please refer to document the log_level is used to pass log parameter into ffmpeg, you can set environment variable to disable the BMF logs, please refer to document
2025-04-01T04:10:12.920172
2018-05-24T06:40:01
325983403
{ "authors": [ "VarshaKamble159", "deltakosh" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13591", "repo": "BabylonJS/Babylon.js", "url": "https://github.com/BabylonJS/Babylon.js/issues/4373" }
gharchive/issue
Unable to get mesh on click event Hi @deltakosh, I am trying to get the mesh I am clicking on. For some models it works fine, but for the attached model the click is not detected and the picked mesh comes back undefined. Attaching the Babylon model for reference: ge_model.zip. The log of pickedMesh from clicking this model is shown in the attached screenshot. It works fine for the following model, cube.zip, whose picked-mesh log is also attached. Please advise. Thanks, Regards, Varsha Hello, can you save the model somewhere and create a repro in the playground? Also please post the question on the forum so people will be able to help you quickly :) http://www.html5gamedevs.com/forum/16-babylonjs/
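For reference, a minimal picking setup in Babylon.js usually looks like the sketch below; the scene and mesh names are placeholders, and pickedMesh is only non-null when the hit test succeeds and the target mesh is pickable.

```javascript
// Assumes an existing BABYLON.Scene called `scene`.
scene.onPointerDown = function (evt, pickResult) {
    if (pickResult.hit && pickResult.pickedMesh) {
        console.log("picked:", pickResult.pickedMesh.name);
    } else {
        console.log("nothing picked");
    }
};

// Imported meshes are pickable by default, but it can be forced:
scene.meshes.forEach(function (mesh) {
    mesh.isPickable = true;
});
```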
2025-04-01T04:10:12.925305
2018-12-18T23:13:38
392375510
{ "authors": [ "TrevorDev", "deltakosh" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13592", "repo": "BabylonJS/Babylon.js", "url": "https://github.com/BabylonJS/Babylon.js/issues/5671" }
gharchive/issue
Turn useInvalidateRectOptimization on by default We cannot still turn the optimization on by default because some issues remain: Few clipping issues on https://www.babylonjs-playground.com/#3VMTI9#24 Rotation issues on this PG: https://www.babylonjs-playground.com/index.html#XCPP9Y#821 (the red rectangle disapears at some angle) For the second issue I think I know why. We need to change the intersectsRect function to make sure we first transform the currentMeasure using this._transformMatrix (The same way we transform the invalidatedRect), then deduce the AABB and finally compare to invalidated rect. Indeed, we need to make sure they are both using transformed coordinates (and thus I think we could get rid of the clearRect and the invalidatedRect and just keep the clearRect for both clearing and clipping) And to be honest I think it is also the same issue for the _clip function: we cannot merge the invalidatedRect with the currentMeasure to determine the clipping rect. This time we need to do the opposite. Getting the invalidatedRect to local coordinates by transforming it with this._invertTransformMatrix and then merging it with currentMeasure If that works we will definitely need to validate performance because these operations won't be free @deltakosh did https://github.com/BabylonJS/Babylon.js/commit/63e3b8dd6fac1c29274f28f7ce532f4879020da3 restore the original functionality or do you need me to take a look at that? Also what clipping issues did you see in the first example? My commit fixed some points (I sent you an email:)) but some issues remain On the first one I saw clipping issue on the rotating part of the left and on the dash line in the middle part Adding https://forum.babylonjs.com/t/gui-not-refreshing-properly-pg-attached/541 as an issue I will fix in this
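Roughly, the check described above (transform the control's measure into the same space as the invalidated rect before intersecting) amounts to the sketch below; the names are illustrative and this is not the actual Babylon GUI code.

```javascript
// transformPoint: (x, y) => ({ x, y }) applies the control's transform matrix.
function measureIntersectsInvalidatedRect(measure, transformPoint, rect) {
    const corners = [
        transformPoint(measure.left, measure.top),
        transformPoint(measure.left + measure.width, measure.top),
        transformPoint(measure.left, measure.top + measure.height),
        transformPoint(measure.left + measure.width, measure.top + measure.height),
    ];
    // Axis-aligned bounding box of the transformed control.
    const minX = Math.min(...corners.map(c => c.x));
    const maxX = Math.max(...corners.map(c => c.x));
    const minY = Math.min(...corners.map(c => c.y));
    const maxY = Math.max(...corners.map(c => c.y));
    // Overlap test against the invalidated rect, already in transformed coordinates.
    return maxX >= rect.left && minX <= rect.left + rect.width &&
           maxY >= rect.top && minY <= rect.top + rect.height;
}
```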
2025-04-01T04:10:12.927690
2019-11-26T00:21:51
528413408
{ "authors": [ "PirateJC", "sebavan" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13593", "repo": "BabylonJS/Babylon.js", "url": "https://github.com/BabylonJS/Babylon.js/issues/7209" }
gharchive/issue
Projection Texture Caching Issue Looks like we've got a very weird caching issue/bug with the Projection Texture. https://playground.babylonjs.com/#KIMW2P#3 In this playground...we should see the projected texture from the spotlight on the groundplane AND the sphere. If you open the inspector and hide the sphere...the ground will inherit the projection texture. Then if you unhide the sphere, now both objects will correctly display the projection texture as expected. Seems like there's some issue where the sphere rendering is blocking the groundplane from getting the projection texture on the initial load? indeed, on the ground, the texture being used is the wrong one: I ll have a look ASAP.
2025-04-01T04:10:12.929099
2017-10-13T15:24:42
265328033
{ "authors": [ "deltakosh", "noalak" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13594", "repo": "BabylonJS/Babylon.js", "url": "https://github.com/BabylonJS/Babylon.js/pull/2944" }
gharchive/pull-request
glTF Exporter > skins + samples Export glTF skins and bones Add FBX samples and their resulting exported files (babylon + glTF) All models are licence free? Yes, some models come from the Khronos Group glTF samples, for recurring test cases; others come from our graphic artist. Sounds good
2025-04-01T04:10:12.951916
2020-04-18T16:53:04
602502973
{ "authors": [ "BaderEddineOuaich", "wfybiz" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13595", "repo": "BaderEddineOuaich/time_ago_provider", "url": "https://github.com/BaderEddineOuaich/time_ago_provider/issues/4" }
gharchive/issue
Seconds option would have been better Need a seconds option also. Currently it starts at minutes. Added fromNow in v2.0.0
2025-04-01T04:10:12.965025
2023-04-13T07:10:15
1665851999
{ "authors": [ "Aahilbhai648", "cuiaiyu", "shrinivasait" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13596", "repo": "BadourAlBahar/pose-with-style", "url": "https://github.com/BadourAlBahar/pose-with-style/issues/29" }
gharchive/issue
IndexError: index 24 is out of bounds for axis 0 with size 24 Hi, I was testing the garment transfer.py code, But it's showing the following error. while I followed every step. Please Guide through. Error:: Traceback (most recent call last): File "/content/drive/MyDrive/pose-with-style/garment_transfer.py", line 196, in generate(args, g_ema, device, mean_latent) File "/content/drive/MyDrive/pose-with-style/garment_transfer.py", line 58, in generate input_uv_coor, input_uv_mask, input_uv_symm_mask = getSymXYcoordinates(input_pose, resolution = 512) File "/content/drive/MyDrive/pose-with-style/util/dp2coor.py", line 9, in getSymXYcoordinates xy, xyMask = getXYcoor(iuv, resolution = resolution) File "/content/drive/MyDrive/pose-with-style/util/dp2coor.py", line 35, in getXYcoor x, y, u, v = mapper(iuv, resolution) File "/content/drive/MyDrive/pose-with-style/util/dp2coor.py", line 73, in mapper uv_smpl = dp_uv_lookup_256_np[ IndexError: index 24 is out of bounds for axis 0 with size 24 This is probably because you load the densepose as VUI rather than IUV (so a RGB or BGR image loading/saving problem.) Thank you for taking the time to answer my query. I have resolved the issue. @shrinivasait @cuiaiyu @BadourAlBahar @fyviezhao Could you please guide me how to fix the below issue. I encountered while trying Repose - Upper Body /content/PWS Traceback (most recent call last): File "/content/PWS/inference.py", line 144, in generate(args, g_ema, device, mean_latent) File "/content/PWS/inference.py", line 48, in generate input_pose = np.array(Image.open(os.path.join(path, input_name+'_iuv.png'))) File "/usr/local/lib/python3.10/dist-packages/PIL/Image.py", line 3227, in open fp = builtins.open(filename, "rb") FileNotFoundError: [Errno 2] No such file or directory: './test_data/source_iuv.png'
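Regarding the original IndexError: the resolution is usually just the channel order used when loading the DensePose IUV image. A minimal sketch (paths are placeholders):

```python
import cv2
import numpy as np
from PIL import Image

# OpenCV loads images in BGR order, so an IUV map read with cv2 comes back as V, U, I.
iuv_bgr = cv2.imread("test_data/source_iuv.png")      # placeholder path
iuv = cv2.cvtColor(iuv_bgr, cv2.COLOR_BGR2RGB)        # channels are I, U, V again

# Loading with PIL keeps the saved RGB order, which is what the repo's code expects.
iuv_pil = np.array(Image.open("test_data/source_iuv.png"))
```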
2025-04-01T04:10:12.990278
2024-10-17T19:47:31
2595599997
{ "authors": [ "BamaCharanChhandogi", "MonilMehta" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13597", "repo": "BamaCharanChhandogi/READMEasy", "url": "https://github.com/BamaCharanChhandogi/READMEasy/pull/88" }
gharchive/pull-request
Issue #82 : Add Stack Selection and Pre-Fill Options with Enhanced UI in InputForm Component Added new stack selection and pre-fill options in the InputForm component, along with significant UI improvements for a better user experience. Changes: Stack Selection: Implemented a dropdown for users to select from predefined project stacks (Python, MERN, Django, React). Dynamically updates the form based on the selected stack. Pre-Fill Feature: Added a "Pre-fill" button that populates the form fields (name, description, features, etc.) with template data for the selected stack. Allows users to quickly start with pre-defined content and edit as needed. UI Enhancements: Styled the stack selection dropdown with a modern look, including focus rings, hover effects, and smooth transitions. Integrated responsive design and accessibility improvements, such as enhanced focus states and placeholder text styling. Hi @MonilMehta, thanks for raising this, but I think that when the user uses the pre-fill button only the installation section should stay the same; the user doesn't need sample content for every section, because they will enter their own project details. Alright, if you have any improvement in mind, I'll be happy to work on it. Regards, Monil Mehta On Fri, 18 Oct, 2024, 11:08 pm Bama Charan Chhandogi wrote: you can keep the prefill option on only the Installation section. Alright thanks, will do that and submit another PR. Regards, Monil Mehta As per your suggestion, added Pre-fill option for installation and contributions. My bad, it's rendering correctly on my device, I'll look into it Changed text color, please check again.
2025-04-01T04:10:12.992092
2015-07-04T20:14:57
93043849
{ "authors": [ "MeMyselfMC", "UCAleksei", "confuser" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13598", "repo": "BanManagement/BanManager-WebUI", "url": "https://github.com/BanManagement/BanManager-WebUI/issues/68" }
gharchive/issue
[Request] Global Punishments Support Hello, Would it be possible to add support for global (external) bans and mutes to the Web UI? Regards, John Would like this as well. Could anyone clarify what kind of support they'd like for this? The external system is merely used internally to sync to local connection storage, therefore showing this data publicly could be a little confusing as it'd be duplicated across multiple servers in the interface.
2025-04-01T04:10:13.020954
2022-10-19T10:02:59
1414677121
{ "authors": [ "gbernaldo82" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13599", "repo": "BarGabriel/SipServer", "url": "https://github.com/BarGabriel/SipServer/issues/4" }
gharchive/issue
Status of the project Hi, good morning! I've seen the source code of this project and I wonder if you are still working on it? I ask because I think this project can help me with some projects I have in mind (not commercial projects, just university projects). I would also like to ask whether there is support for audio (RTP) or not? Thank you so much! Ok, thank you so much for your answer.
2025-04-01T04:10:13.042874
2016-09-07T13:59:03
175509872
{ "authors": [ "coveralls", "rianquinn" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13608", "repo": "Bareflank/hypervisor", "url": "https://github.com/Bareflank/hypervisor/pull/189" }
gharchive/pull-request
Remove Redundant Null Pointer Checks Remove uses of == nullptr and != nullptr Signed-off-by: “Rian<EMAIL_ADDRESS> Coverage remained the same at 100.0% when pulling ff7882dbc9b6432268f64eef6730150b9bb15aa6 on rianquinn:remove_nullptr_checks into 82f447aa3bfcf419c4c3222278c7517e9e8bc8b1 on Bareflank:master. The readability checks are failing as they want explicit == and != nullptr. Furthermore, the C++ Core Guidelines use this same syntax.
2025-04-01T04:10:13.055598
2023-10-31T15:03:00
1970672810
{ "authors": [ "Baroshem", "vejja" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13610", "repo": "Baroshem/nuxt-security", "url": "https://github.com/Baroshem/nuxt-security/pull/274" }
gharchive/pull-request
feat(csp): add hashStyles option for SSG Types of changes [ ] Bug fix (a non-breaking change which fixes an issue) [x] New feature (a non-breaking change which adds functionality) [ ] Breaking change (fix or feature that would cause existing functionality to change) Description This PR adds an ssg: hashStyles option in the ModuleOptions interface. The background for this option is that CSP hashes do not carry the exact same meaning for <script> and for <style> tags. For the script-src policy, when 'strict-dynamic' is enabled, new <scripts> inserted by hashed scripts will be allowed For the style-src policy, there is no 'strict-dynamic' option, and therefore new <styles> inserted by hashed scripts will be disallowed. As a result, a user who is not in control of how scripts dynamically insert styles, may typically want to introduce script hashes but exclude style hashes. Checklist: [ ] My change requires a change to the documentation. [ ] I have updated the documentation accordingly. [ ] I have added tests to cover my changes (if not applicable, please state why) Hi @vejja If you dont mind. I would like to release a new RC version today without this feature. We have gathered a lot of features and fixes already and I would like to publish them as separate versions. So for this, I would love to have it in rc.4 :) Hi @vejja If you dont mind. I would like to release a new RC version today without this feature. We have gathered a lot of features and fixes already and I would like to publish them as separate versions. So for this, I would love to have it in rc.4 :) Yes, makes sense BTW I'd like to propose some changes to the options interface (in relation to CSP only) I know this is a sensitive point - what do you think and what's the good place to potentially discuss this ? @vejja I think seperate issue with feature request label would be a good place :) @vejja For some reason this Pr was considered as merged even though I have not merged it. I think it has to do with the commit history. Could you please prepare a new PR in the next days with this functionality? We will make it for 1.0.0-rc.4 :)
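For readers, the intended usage would look roughly like the snippet below. The option shape follows this PR's description; the hashScripts name is assumed from the existing options, so the released docs should be checked.

```typescript
// nuxt.config.ts (sketch)
export default defineNuxtConfig({
  modules: ['nuxt-security'],
  security: {
    ssg: {
      hashScripts: true,   // keep CSP hashes for <script> tags ('strict-dynamic' still allows injected scripts)
      hashStyles: false,   // skip style hashes so dynamically injected <style> tags are not blocked
    },
  },
})
```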
2025-04-01T04:10:13.059673
2023-11-19T15:52:19
2000910372
{ "authors": [ "MennoSt" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13611", "repo": "BartSte/fzf-help", "url": "https://github.com/BartSte/fzf-help/issues/22" }
gharchive/issue
bat command is not working if the linux distribution installs the executable as batcat Info: platform: Ubuntu 22.04 version: 2.0.1 Issue: On some distributions, such as Ubuntu 22.04, the bat executable is installed as batcat instead of bat, so we get a bat execution error. Solution: It can be solved by making a symlink, but it would be better if the script supported both bat and batcat. Fixed by https://github.com/BartSte/fzf-help/pull/23
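For anyone on an affected release, the symlink workaround mentioned above is just the following (assuming ~/.local/bin is on PATH):

```sh
mkdir -p ~/.local/bin
ln -s "$(command -v batcat)" ~/.local/bin/bat
```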
2025-04-01T04:10:13.095188
2019-09-03T05:55:13
488405668
{ "authors": [ "Apollo108", "ChiefMax", "RakhimyanTim", "bijoya-banik", "mvanbeusekom", "pr0thean" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13612", "repo": "Baseflow/flutter-geolocator", "url": "https://github.com/Baseflow/flutter-geolocator/issues/321" }
gharchive/issue
PlatformException (PlatformException(ERROR_GEOCODING_COORDINATES, Service not Available, null)) 💬 Questions and Help For questions or help we recommend checking: The Flutter tag in Stack Overflow I want to convert latitude and longitude to formatted address. But don't know why it shows me this error now. Same same here... Hello @bijoya-banik, @RakhimyanTim, @pr0thean. I have tested this with our company's address and it gave me the correct response. I used this: List<Placemark> placemark = await Geolocator().placemarkFromCoordinates(52.221539, 6.893662); Please make sure to see if the Latitude and Longitude are properly filled in and that you have given the app permission to use the GPS. @bijoya-banik, @RakhimyanTim, @pr0thean, the "Service not Available" error indicates that an IOException occurred when executing the native call (see Android docs). Usually this means you don't have internet which is required to translate the longitude / latitude into a physical address. @mvanbeusekom this issue doesn't reproduce on the emulator in the debug mode, but it's still there in the release apk on my Android device, and I have good internet connection. Please, help with this... Hello @Apollo108, it is very well possible that the device doesn't have Play Services installed or enabled (maybe blocked by goverment firewalls). The geocoding feature relies on the native Android implementation which means it needs Play Services to be available. @mvanbeusekom checked with https://pub.dev/packages/google_api_availability plugin, seems Play Services are OK, and the issue is still here... What am I doing wrong? @Apollo108 as mentioned earlier, the "Service not Available" indicates that an IOException occurred during the translation of the coordinates. According to the underlying Android SDK we are using this means "IOException is thrown if the network is unavailable or any other I/O problem occurs". This is not really something I can help you fix. More information are in the Android docs.
2025-04-01T04:10:13.106489
2020-03-19T16:42:44
584548643
{ "authors": [ "davidpfarrell", "nwinkler" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13613", "repo": "Bash-it/bash-it", "url": "https://github.com/Bash-it/bash-it/issues/1521" }
gharchive/issue
Making prompt segments like ruby_version_prompt optional Greetings All, TIL it appears that, in general, one cannot disable the ruby_version_prompt. I recently reviewed #1520 which, in their first version of the PR, had completely removed the ruby_versoin_prompt segment from the theme. I thought it was odd, but now I realize they likely did it because there was no other clear way to disable the prompt segment. Up until now, I kinda assumed it only showed if the user was in a ruby source directory of some kind (which is how some of the other segments work), but it appears that, if the prompt is used, and if Ruby is installed, it will always show the ruby version in the prompt text. I must say that sounds a bit annoying. A quick search suggests this is accepted behavior across all themes that use the prompt. In general, these themes don't "build up" their prompt on the idea of configurable segments (like say Powerline), so there are no theme-specific ways of excluding these types of segments. Wondering if something like the following might be useful: function ruby_version_prompt { if [ -z "${RUBY_VERSION_PROMPT_DISABLED}" ]; then echo -e "$(rbfu_version_prompt)$(rbenv_version_prompt)$(rvm_version_prompt)$(chruby_version_prompt)" fi } If we think this approach is useful, we could probably convince @shiguol to write up a PR since they were keen to disable it in their theme :) -DF cc: @shiguol I like this idea - let's do this! This was closed by #1520 - Nice work team !
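If the variable sketched above is what landed (the merged PR may have named it differently), opting out of the segment is a one-liner:

```bash
# ~/.bashrc, before bash-it loads the theme
export RUBY_VERSION_PROMPT_DISABLED=true
```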
2025-04-01T04:10:13.111000
2024-08-05T14:48:46
2448749033
{ "authors": [ "bt-platform-eng", "kevinperaza" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13614", "repo": "Basis-Theory/developers.basistheory.com", "url": "https://github.com/Basis-Theory/developers.basistheory.com/pull/427" }
gharchive/pull-request
chore: upgrade deps Description chore: upgrade deps Testing required outside of automated testing? [x] Not Applicable Screenshots (if appropriate): [x] Not Applicable Rollback / Rollforward Procedure [ ] Roll Forward [x] Roll Back Reviewer Checklist [ ] Description of Change [ ] Description of outside testing if applicable. [ ] Description of Roll Forward / Backward Procedure [ ] Documentation updated for Change :tada: This issue has been resolved in version 1.169.0 :tada: The release is available on GitHub release Your semantic-release bot :package::rocket:
2025-04-01T04:10:13.112965
2019-01-31T16:58:41
405344121
{ "authors": [ "Worien", "ibc" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13615", "repo": "BasqueVoIPMafia/cordova-plugin-iosrtc", "url": "https://github.com/BasqueVoIPMafia/cordova-plugin-iosrtc/issues/355" }
gharchive/issue
"js/adapter-latest.js" is unavailable From README: In case you'd like to expose the API in the global namespace like regular browsers you can do the following: // Just for Cordova apps. document.addEventListener('deviceready', function () { // Just for iOS devices. if (window.device.platform === 'iOS') { cordova.plugins.iosrtc.registerGlobals(); // load adapter.js var script = document.createElement("script"); script.type = "text/javascript"; script.src = "js/adapter-latest.js"; script.async = false; document.getElementsByTagName("head")[0].appendChild(script); } }); But when I'm trying to use this code I see that there are no files like adapter-latest.js and that's why webrtc is not working in cordova app. But adapter should be included in the webrtc ios cordova plugin? It is webrtc related class. Am I wrong? No, It shouldn't.
2025-04-01T04:10:13.199780
2020-10-28T07:53:09
731197293
{ "authors": [ "kazaryan-t" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13616", "repo": "BeamMW/beam-ui", "url": "https://github.com/BeamMW/beam-ui/issues/239" }
gharchive/issue
Transactions list - cannot delete transactions from the list of transactions. Transactions are not deleted from the list of transactions when we click the three dots button and then the delete button. Steps to reproduce: Open the Transaction list in the wallet. Click the three dots button next to any completed transaction. Click Delete. Actual result - transaction is not deleted. Expected result - transaction is deleted from the list of transactions. Checked on master 5.2.9723.3301
2025-04-01T04:10:13.236725
2024-12-24T02:30:19
2757059094
{ "authors": [ "BeanCheeseBurrito" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13617", "repo": "BeanCheeseBurrito/Flecs.NET", "url": "https://github.com/BeanCheeseBurrito/Flecs.NET/issues/81" }
gharchive/issue
Replace code generators with C# 14 extensions The extension code generators should be replaced with the C# 14 extensions feature when it lands. This will get rid of about 400,000 lines of generated code and make the wrapper significantly easier to maintain and contribute to as we will only have to write extension methods once and have it be usable by many types. https://github.com/dotnet/csharplang/blob/main/proposals/extensions.md public struct Entity : IEntity { ... } public struct Component<T> : IEntity { ... } public struct System<T, ...> : IEntity { ... } public struct Observer<T, ...> : IEntity { ... } public static class EntityMethods { // All types that implement the IEntity interface will receive .Get and .Set methods. extension<T> (T entity) where T : IEntity { public ref TComponent Get<TComponent>() { ... } public ref T Set<TComponent>(TComponent value) { ... } } } Apparently this is already possible with the current C# instance extension syntax. Wasn't aware of this and will begin getting rid of most of the extension source generators. Current extension syntax doesn't work well with methods that have type parameters. public static class EntityMethods { public static TEntity Add<TEntity, T>(this TEntity self) where TEntity : IEntity { return self.Add(Type<T>.Id(self.World)); } } // Both type parameters must be passed. Entity e = world.Entity(); e.Add<Position>(); // Invalid e.Add<Entity, Position>(); // Valid Will have to wait until the new extensions feature in .NET 10.
2025-04-01T04:10:13.275261
2014-12-15T20:25:02
52034061
{ "authors": [ "garak", "mikeSimonson" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13618", "repo": "Bee-Lab/bowerphp", "url": "https://github.com/Bee-Lab/bowerphp/pull/92" }
gharchive/pull-request
Refactor and adding some tests Should I add support for that type of URL? $this->assertEquals('components/jqueryui', $clearGitURL->invokeArgs($this->repository, array('git@github.com:components/jqueryui.git'))); Sorry Mike, I forgot about this and meanwhile the code changed. Can you review your PR and see if it's still applicable? I am fixing it right now Done Sorry for the delay. Should be fixed now. Thank you!
2025-04-01T04:10:13.333963
2024-07-30T03:21:19
2436798411
{ "authors": [ "Ailrun", "HuStmpHrrr" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13619", "repo": "Beluga-lang/McLTT", "url": "https://github.com/Beluga-lang/McLTT/issues/147" }
gharchive/issue
Document the source code related to #145 There is a particular way in which we should write the docs in the source files so that it generates more reasonable and readable outputs. This can wait, until we need to submit some artifact. consider https://github.com/coq-community/proviola and https://github.com/cpitclaudel/alectryon I think both have some issues. https://github.com/coq-community/proviola seems dead, and it is not even clear whether it is compatible with Coq 8.17.1. Also, there is no clear installation/execution doc. https://github.com/cpitclaudel/alectryon works fine, but the output has one (critical, IMO) issue. There is no "go-to-definition" links. 2. https://github.com/cpitclaudel/alectryon works fine, but the output has one (critical, IMO) issue. There is no "go-to-definition" links. what? that's deadly! really sad that this basic stuff is not working. ok, make sense to me. I have a template, if you want to save some time, this template works: https://hustmphrrr.github.io/popl20-artifact/ otherwise, I am also open to other options. I think we can first change all (* to (** I think we should use a different star as the coqdoc renders something less expected: https://beluga-lang.github.io/McLTT/toc.html @HuStmpHrrr We can move them to h3. But I hate how the format ignores a basic recommendation for html: "Do not use multiple h1s". That formatting assumes the use of multiple h1s as items.
2025-04-01T04:10:13.335424
2021-12-04T13:06:58
1071204046
{ "authors": [ "Ben-Padbury" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13620", "repo": "Ben-Padbury/api_authorisation_token_scraper", "url": "https://github.com/Ben-Padbury/api_authorisation_token_scraper/issues/13" }
gharchive/issue
Namespaces aren't recognised. Within the 'src/api_authorisation_token/api/apple/music.py' class file we are unable to import the Scraper class without the IDE complaining that the 'api_authorisation_token' module cannot be found. I suspect the way we set up the environment and modules is incorrect. Closed with #14
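One common way to make a src/ layout importable, sketched with setuptools (an assumption; #14 may have fixed it differently):

```python
# setup.py (assumption: the project uses setuptools)
from setuptools import setup, find_packages

setup(
    name="api_authorisation_token",
    version="0.0.1",
    package_dir={"": "src"},               # packages live under src/
    packages=find_packages(where="src"),
)
```

After that, an editable install (pip install -e .) lets both the interpreter and the IDE resolve api_authorisation_token.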
2025-04-01T04:10:13.341502
2024-02-01T09:14:41
2111969784
{ "authors": [ "BenPru", "ChrisMisker", "Crashman1983", "Kars-de-Jong" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13621", "repo": "BenPru/luxtronik", "url": "https://github.com/BenPru/luxtronik/issues/235" }
gharchive/issue
Heat amount decreasing, but state_class is total_increasing The sensor heat_amount_heating has a state_class of "total_increasing", but sometimes I see a decrease in the amount of heat generated. That's strange, isn't it? Seems like that shouldn't be possible. Please connect an USB-Stick to your heatpump and activate the data logger. Then look into it and validate that the peaks are not in the data from the Luxtronik data logger. Another strange thing happened last night: I have the datalogger downlad, but no idea what to do with it. Can you point me in the right direction? You should find three files: DTA+CSV+ERR The CSV can you open with Excel. See also: https://www.youtube.com/watch?v=9cUT0Vh4x2Y https://www.haustechnikdialog.de/Forum/t/174914/Luxtronik-II-Datenlogging Are you sure your heat pump controller hasn't restarted? I have seen several people on a german forum report crashes with version 3.89.x of the firmware. With the current firmware (don't know if that was the case before) the Heatpump seams to "correct itself". Another situation where a decrease can be seen, is during deicing (Abtauen). Seems to be normal. My Luxtronik is crashing is well and sometimes it destroys my statistics.
2025-04-01T04:10:13.352298
2020-10-25T20:29:59
729106744
{ "authors": [ "BenedicteGiraud", "a-eid" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13622", "repo": "BenedicteGiraud/react-sliding-side-panel", "url": "https://github.com/BenedicteGiraud/react-sliding-side-panel/issues/22" }
gharchive/issue
does not work with nextjs. The way this library uses CSS is not supported by Next.js; it's probably better to have the user import the library's CSS manually ... Hello @a-eid, version 2.0.3 works with Next.js, as you need to include the CSS file yourself. You have an example here: https://github.com/BenedicteGiraud/react-sliding-side-panel/tree/master/examples/example3-nextjs
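In practice the manual include looks something like this; the stylesheet path inside the package is an assumption, so check the linked example for the real one.

```javascript
import React from 'react';
import SlidingPanel from 'react-sliding-side-panel';
// Path is illustrative only; use whichever CSS file the package actually ships.
import 'react-sliding-side-panel/lib/index.css';
```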
2025-04-01T04:10:13.398597
2017-05-22T19:34:40
230496067
{ "authors": [ "BerndWessels", "whatisupalldown" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13623", "repo": "BerndWessels/react-webpack", "url": "https://github.com/BerndWessels/react-webpack/issues/2" }
gharchive/issue
Got blank page I followed the instruction and got a blank page for http://localhost:3000. This sample needs upgrade to Relay Modern to work. Hi, sorry yes - this is pretty out of date now. I will put a warning about that on the readme to make that clear. Cheers Bernd
2025-04-01T04:10:13.408067
2022-01-22T08:13:54
1111334791
{ "authors": [ "Dineen94", "HeptaSean" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13624", "repo": "Berry-Pool/nami-wallet", "url": "https://github.com/Berry-Pool/nami-wallet/issues/282" }
gharchive/issue
Missing ADA Hi guys, Please can you help me with an issue on Nami? It appears in my wallet that i have 301 ADA in my wallet when I should have 3191 ADA. In transaction ID 30fb2777a6816566798628992441e5a26c68ab92f1e69e62f865fb480dbafaf8, I sent 2000 ADA to my other sub-wallet in Nami but it appears that another 2869 ADA has gone. It shows in my "http://pool.pm" that the ADA is still in my wallet but I am unable to see it in Nami? (https://pool.pm/addr1q9szxdf0neuz8zvhsxyuuyj63r4y08jvfwtph57a937l2s7meczvtpxskk3hg2ra8r3ak546g2rc3gfa0xqhe03fl9hsk65m9g) Please help :) You probably have done transactions with another wallet app than Nami. All other wallet apps use a lot of addresses per wallet, while Nami only sees and uses the first one. Transactions in other wallet apps move things away from where Nami can see them. But it should show the real balance in a pop-up at the small “i” next to the balance. You can move everything back to that first address (the one Nami shows in “Receive”) with a transaction in the other wallet app. ccvault has a “Single address mode” in the settings to prevent this moving away and cooperate nicely with Nami. (Also try “Collateral” on the same page if you set a collateral in Nami. But do not set it in ccvault. ccvault will recognise the collateral UTxO done by Nami, but Nami will not if ccvault does it. Both knowing it should prevent ccvault from using the collateral UTxO for normal transactions. Otherwise, Nami would prompt to set it again and again costing transaction fees.) Yes! this sorted it, sent all ADA to my OG address and it's fixed, thank you! From: Benjamin Braatz @.> Sent: 22 January 2022 22:55 To: Berry-Pool/nami-wallet @.> Cc: Dineen94 @.>; Author @.> Subject: Re: [Berry-Pool/nami-wallet] Missing ADA (Issue #282) You probably have done transactions with another wallet app than Nami. All other wallet apps use a lot of addresses per wallet, while Nami only sees and uses the first one. Transactions in other wallet apps move things away from where Nami can see them. But it should show the real balance in a pop-up at the small “i” next to the balance. You can move everything back to that first address (the one Nami shows in “Receive”) with a transaction in the other wallet app. ccvault has a “Single address mode” in the settings to prevent this moving away and cooperate nicely with Nami. (Also try “Collateral” on the same page if you set a collateral in Nami. But do not set it in ccvault. ccvault will recognise the collateral UTxO done by Nami, but Nami will not if ccvault does it. Both knowing it should prevent ccvault from using the collateral UTxO for normal transactions. Otherwise, Nami would prompt to set it again and again costing transaction fees.) — Reply to this email directly, view it on GitHubhttps://github.com/Berry-Pool/nami-wallet/issues/282#issuecomment-1019372477, or unsubscribehttps://github.com/notifications/unsubscribe-auth/AXNITDRCW6SVR3HSYCXXJNLUXMYXDANCNFSM5MRTMBAA. Triage notifications on the go with GitHub Mobile for iOShttps://apps.apple.com/app/apple-store/id1477376905?ct=notification-email&mt=8&pt=524675 or Androidhttps://play.google.com/store/apps/details?id=com.github.android&referrer=utm_campaign%3Dnotification-email%26utm_medium%3Demail%26utm_source%3Dgithub. You are receiving this because you authored the thread.Message ID: @.***> That's good to hear! Would you close the issue, so that the Nami developers have less to go through, when they return here?
2025-04-01T04:10:13.441288
2023-10-26T18:51:51
1964174280
{ "authors": [ "ZenMasta", "fowlis", "rauenzi" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13625", "repo": "BetterDiscord/BetterDiscord", "url": "https://github.com/BetterDiscord/BetterDiscord/issues/1684" }
gharchive/issue
[Feature] Enable/disable all plugins one toggle Before Requesting [X] I found no existing issue matching my feature request Describe the feature you'd like! With the frequency of discord breaking stuff. It would be nice if after repair installing BD, there was a toggle to enable all plugins instead of doing them one at a time. Anything else? No response It's intended behaviour with the repair option to disable all your plugins. Run the install option if you want to keep them enabled. Duplicate of #723
2025-04-01T04:10:13.448796
2024-11-07T05:53:01
2639965097
{ "authors": [ "batnom", "cngarrison" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13626", "repo": "Beyond-Better/bb", "url": "https://github.com/Beyond-Better/bb/issues/23" }
gharchive/issue
New Conversation button doesn't always work I have found that hitting the New Conversation button doesn't always create a new conversation. I found myself doing a manual bb stop and bb start to work around this whenever I needed a new conversation. It doesn't look like there are any errors logged to .bb/api.log when this occurs, but I didn't check the browser errors apologies. Will append further information such as browser console logs the next time I experience this. Experienced on v0.2.2 This is a known bug; it will be fixed in the upcoming changes to browser interface. In the interim, doing a page refresh in the browser will start a new conversation. The updated BUI has a working button for New Conversation.
2025-04-01T04:10:13.469602
2020-02-04T17:40:53
559876256
{ "authors": [ "OrkhanAlikhanov", "chrigu1981" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13627", "repo": "BiAtoms/Socket.swift", "url": "https://github.com/BiAtoms/Socket.swift/issues/16" }
gharchive/issue
Ubuntu 16.04 does not build I get a warning when I try to build my project:
warning: you may be able to install libtls using your system-packager: apt-get install libressl
Installing it with apt-get install libressl does not find the package. Errors on building:
.build/checkouts/CLibreSSL/clibressl.h:4:10: error: 'tls.h' file not found
.build/checkouts/Socket.swift/Sources/TLS.swift:12:12: error: could not build C module 'CLibreSSL'
Hey! Check out #12
2025-04-01T04:10:13.555215
2017-09-19T12:42:59
258806510
{ "authors": [ "AlasdairGray", "guicalman", "ljgarcia" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13628", "repo": "BioSchemas/map2model", "url": "https://github.com/BioSchemas/map2model/issues/27" }
gharchive/issue
What about CV column? I do not see anything on the CV column, even when mappings do have something there. Will it be included? I would say CV is very important in Bioschemas, even more now with the changes on PhysicalEntity and Record. Cheers, Very good question! This question is related with mapping generation and partially to map2model, but I give a better description here. If you access spec_process_detail.md you find the step-by-step process of how to fill the specification mapping, there is the following explanation. Maybe this was not so clear due to the building state of this file. CV means Controlled Vocabulary, this field contains a list term/ontology. A term is just a text that represent the value expected and an ontology is a namespace that contains ontology name colon (:) the ontology URL. Examples: Term list term1, term2, ter3 Ontology list onto1: http://onto1.com, onto2: http: //onto2.com, onto3: http://onto1.com Mixed list onto1: http://onto1.com, term1, onto2: http: //onto2.com, term2, term3, onto3: http://onto1.com In the current version of PhysicalEntity you can see how the text of CV for additionalType is parsed. Let me know if you have any additional question, or I can mark this issue as closed. This way would probably work for profiles but not necessarily for types. For instance, CV are recommended for the additionalType but which ontologies and which terms is up to the profiles, not the types. Also, CV are recommended for the additionalProperty property but which ontologies and terms depends on the property/relation that you want to describe. Thanks @ljgarcia for your feedback, but again, this is a mapping concern and not a map2model issue. For the community's sake, here is what I'd found about your question. From my stand point, this is a conceptual definition misunderstanding. Reviewing the Bioschemas Agreement Meeting's memories there is no precise definition of what a Type is and what a Profile is. The concept Profile is only mentioned in two cases: "Start with the protein profile using biologicalEntity" . Considering Protein and BiologicalEntity terms are mentioned, this could be your proposal. "Alasdair made the analogy to the HCLS community profile (see image below) where the data repository is capturing the Summary Level description and the dataset is capturing the version level description." I copy @rajido and @AlasdairGray to check this discussion, maybe they have some ideas to help us clarifying this topic. Could you please describe what do you have in mind as a Type and what as a Profile? With this concepts Bioschemas mapping spreadsheet can be changed to fill your needs. If my documentation's digging was complete, this could be a topic to be considered in the next Bioschemas meeting I believe that we are now distinguishing between types that need to be added to Schema.org and profiles that are layered over them. Bioschemas specifications represent the profiles and this is what the map2model tool generates. For types we should follow the standard Schema.org representation. (I think a lot of confusion comes because our profiles very often have the same name as the type, e.g. Bioschemas Dataset is a profile over the Schema.org Dataset type.) For types there is no notion of MG, CD, or CV. Instead, a type should have a place in the Schema.org hierarchy showing what they inherit and then a set of properties that they add. These properties will have a name, expected type, and a description. We should follow the same layout as the Schema.org pages, e.g. 
schema:Dataset. This layout allows a user to quickly find the properties that are most specific to that type, but do not necessarily support the marking up of resources according to a community agreed profile. For profiles the display should be supporting the use case of users marking up resources according to a community agreed set of properties. In my opinion we should have the mandatory properties grouped together, then the recommended, and finally the optional properties, as was done with the Bioschemas Event Specification. As these are community profiles it is appropriate to include the MG, CD, and CV columns and to have recommendations of the particular vocabularies to use.
2025-04-01T04:10:13.616451
2024-01-27T05:44:39
2103248151
{ "authors": [ "Bionus", "ProtagNeptune" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13629", "repo": "Bionus/imgbrd-grabber", "url": "https://github.com/Bionus/imgbrd-grabber/issues/3092" }
gharchive/issue
Remux WebM to MP4 Maybe add the option "Remux WebM to MP4" in the Filename below "Replace JPEG by JPG"? https://stackoverflow.com/questions/18123376/webm-to-mp4-conversion-using-ffmpeg/60443156#60443156 This will also solve ExifTool being incompatible with WebM for adding metadata to the downloaded videos in Grabber. Just pushed a commit to implement this. The setting is in "Save > Format conversion" and requires ffmpeg to be present somewhere in the PATH where Grabber can find it. Note that it has one drawback however: the files will be properly detected if you use a MD5 list, as the remuxed path will be stored in the MD5 database, but they won't be found without it. Because if you use %ext% to check if the image exists on the disk, it will search for "webm" files, but those won't exist. Working around this would be a quite extensive change, so for now I believe it should be fine. I'm wondering if this happens before applying metadata. So that the ExifTool can apply the metadata to the remuxed mp4? Note that it has one drawback however: the files will be properly detected if you use a MD5 list, as the remuxed path will be stored in the MD5 database, but they won't be found without it. Because if you use %ext% to check if the image exists on the disk, it will search for "webm" files, but those won't exist. Working around this would be a quite extensive change, so for now I believe it should be fine. Maybe add a flag in the MD5 database that this file was "converted" to "mp4"? So that when searching for it, it finds the "webm" files with the "converted" flag set to "mp4" so it checks if there's an mp4 of that file.
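For reference, the remux step itself is just a stream copy into an MP4 container, roughly:

```sh
# Stream copy (no re-encode) into an MP4 container; +faststart helps web playback.
# This only works when the WebM codecs are accepted in MP4 (e.g. VP9/Opus with recent ffmpeg);
# otherwise a re-encode such as -c:v libx264 -c:a aac is needed instead.
ffmpeg -i input.webm -c copy -movflags +faststart output.mp4
```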
2025-04-01T04:10:13.643100
2019-02-07T16:42:59
407792915
{ "authors": [ "bitbager", "mbugla" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13630", "repo": "BitBagCommerce/SyliusCmsPlugin", "url": "https://github.com/BitBagCommerce/SyliusCmsPlugin/pull/237" }
gharchive/pull-request
Support sylius 1.4 Q A Bug fix? no New feature? no BC breaks? yes Deprecations? no Related tickets fixes #X, partially #Y, mentioned in #Z License MIT 🎉
2025-04-01T04:10:13.671353
2017-06-14T12:47:33
235867364
{ "authors": [ "rnevet", "tisdall" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13631", "repo": "BitBotFactory/poloniexlendingbot", "url": "https://github.com/BitBotFactory/poloniexlendingbot/pull/390" }
gharchive/pull-request
fix and improve #365 @rnevet pointed out I was re-raising the wrong exception if Poloniex responded with a non-JSON response (in #365). I fixed that, but also made it so the error message will contain whatever is in the response from Poloniex if it's not actually JSON or is JSON but doesn't contain a 'error' component. Types of changes [x] Bug fix (non-breaking change which fixes an issue) [ ] New feature (non-breaking change which adds functionality) [ ] Breaking change (fix or feature that would cause existing functionality to change) Checklist: [x] I have read CONTRIBUTING.md [x] I fully understand Github Flow. [x] My code adheres to the code style of this project. [n/a] I have updated the documentation in /docs if I have changed the config, arguments, logic in how the bot works, or anything that understandably needs a documentation change. [n/a] I have updated the config file accordingly if my change requires a new configuration setting or changes an existing one. [n/a] I have tested the bot with no issues for 24 continuous hours. If issues were experienced, they have been patched and tested again. 2017-06-14 15:51:10 Error: HTTP Error 502: Bad Gateway Requesting returnActiveLoans. Maybe just on 502 we ignore the content? I think a 502 is essentially saying that Cloudflare can't contact the actual API. I'm quite sure you are right, can do that.
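Not the actual patch, but the behaviour described in this PR amounts to something like:

```python
import json

def api_error_message(body: str) -> str:
    """Build an error message, falling back to the raw response when it is not
    the expected JSON-with-'error' shape (sketch of the behaviour described above)."""
    try:
        data = json.loads(body)
    except ValueError:
        # Non-JSON response, e.g. an HTML 502 page from the gateway.
        return "Non-JSON response from Poloniex: %.200s" % body
    if isinstance(data, dict) and "error" in data:
        return "Poloniex error: %s" % data["error"]
    return "Unexpected response from Poloniex: %.200s" % body
```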
2025-04-01T04:10:13.688655
2019-06-13T17:48:02
455872450
{ "authors": [ "cgcardona", "coveralls" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13632", "repo": "Bitcoin-com/rest.bitcoin.com", "url": "https://github.com/Bitcoin-com/rest.bitcoin.com/pull/460" }
gharchive/pull-request
v3.11.2 Refining local logging code after tests Pull Request Test Coverage Report for Build 2434 0 of 0 changed or added relevant lines in 0 files are covered. 1 unchanged line in 1 file lost coverage. Overall coverage remained the same at 70.93% Files with Coverage Reduction New Missed Lines % dist/util/winston-logging.js 1 87.5% Totals Change from base Build 2423: 0.0% Covered Lines: 2193 Relevant Lines: 2890 💛 - Coveralls
2025-04-01T04:10:13.692752
2021-11-29T10:07:24
1065844637
{ "authors": [ "ayushkumar63123", "janus" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13633", "repo": "BitgesellOfficial/bitgesell", "url": "https://github.com/BitgesellOfficial/bitgesell/issues/75" }
gharchive/issue
Fixed some tests I have fixed some tests and have a successful build. I have requested a merge. Please merge my PR to remove the errors from this repository. Thanks You're welcome, but this was supposed to be part of the bug bounty program issue. No problem though, since I did it very long ago 👍
2025-04-01T04:10:13.695596
2020-11-01T17:55:27
734024944
{ "authors": [ "BjarneBitscrambler" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13634", "repo": "BjarneBitscrambler/OrientationSensorFusion-ESP", "url": "https://github.com/BjarneBitscrambler/OrientationSensorFusion-ESP/issues/3" }
gharchive/issue
Wireless data streaming is desirable Currently the code streams data via serial UART to the outside world. This is inconvenient for testing: the USB cable objects to being twisted into a pretzel while manipulating the sensor. Adding wireless streaming, e.g. via TCP socket over WiFi, would be handy. Updated the code (mainly changes to main.cpp and control.cpp) to configure the ESP device as a WiFi AP on startup. Connecting to the ESP on TCP port 23 causes the data to stream out on that port too (it continues to stream on the serial UART, regardless of the WiFi connection).
2025-04-01T04:10:13.697204
2016-11-07T22:56:46
187853994
{ "authors": [ "BjoernPetersen", "FelixGail" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13635", "repo": "BjoernPetersen/MusicBot", "url": "https://github.com/BjoernPetersen/MusicBot/issues/124" }
gharchive/issue
Save Lyrics for display in App/Website As this bot is a great addition to partys, often people want to sing along. Therefore I would fancy a lyrics feature. As always in python, there are already API's supporting lyrics download: songtext PyLyrics (looks easier to handle) Seems like a great idea
2025-04-01T04:10:13.709026
2024-08-20T00:37:58
2474499414
{ "authors": [ "Bl3f", "MohamedBsh", "benjaminsicard" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13636", "repo": "Bl3f/yato", "url": "https://github.com/Bl3f/yato/pull/5" }
gharchive/pull-request
Update dependencies 🦆 @Bl3f This PR updates several key dependencies in our pyproject.toml file to improve compatibility with Python 3.12 and resolve installation issues, particularly with DuckDB. It also adjusts the Python version range to ensure compatibility with pandas. Changes Upgraded duckdb from ^0.9.2 to ^1.0.0 Upgraded sqlglot from ^22.0.1 to ^23.0.2 Upgraded pandas from 2.0.3 to ^2.2.0 Changed Python version range from >=3.8.1,<3.13 to >=3.9,<3.13 Rationale These upgrades aim to resolve installation issues encountered when using Poetry with Python 3.12 #4 , specifically a ChefBuildError related to DuckDB installation. The newer versions of these dependencies offer better compatibility with Python 3.12 and include various improvements and bug fixes. The Python version range was adjusted to >=3.9,<3.13 to accommodate pandas 2.2.0, which requires Python 3.9 or later. Testing (base) poetry update Updating dependencies Resolving dependencies... (1.9s) Package operations: 23 installs, 0 updates, 0 removals - Installing six (1.16.0) - Installing jmespath (1.0.1) - Installing python-dateutil (2.9.0.post0) - Installing urllib3 (2.2.2) - Installing botocore (1.35.1) - Installing mdurl (0.1.2) - Installing click (8.1.7) - Installing iniconfig (2.0.0) - Installing markdown-it-py (3.0.0) - Installing mypy-extensions (1.0.0) - Installing packaging (24.1) - Installing pathspec (0.12.1) - Installing platformdirs (4.2.2) - Installing pluggy (1.5.0) - Installing pygments (2.18.0) - Installing s3transfer (0.10.2) - Installing black (24.8.0) - Installing boto3 (1.35.1) - Installing duckdb (0.10.3) - Installing isort (5.13.2) - Installing pytest (8.3.2) - Installing rich (13.7.1) - Installing sqlglot (23.17.0) Notes The Python version range has been narrowed to >=3.9,<3.13 to ensure compatibility with pandas 2.2.0. This change drops support for Python 3.8. If support for 3.8 is crucial, we may need to consider using an older version of pandas or creating separate dependency sets for different Python versions. I confirm the need to merge this PR. I tried to use yato for the first time today, and here is my experience: first I installed yato in a venv with python 3.12 --> not working because of duckdb dependancies then I used a python 3.8 venv --> not working because it could not import 'TopologicalSorter' from the 'graphlib' library then I used a python 3.9 venv --> this time I could install but I could not run yato because of a duckdb internal error (Unsupported compression function type) finally, I forked the repo, implemented the changes from this PR, installed my own yato, and everything worked ;-) Hey folks, thank you so much for your contributions and tries. I've merged @MohamedBsh PR to bump everything. I publish a new version on PyPi during the day! So happy to have you here, I'm gonna get back yato on rails very soon! Hey folks, thank you so much for your contributions and tries. I've merged @MohamedBsh PR to bump everything. I publish a new version on PyPi during the day! So happy to have you here, I'm gonna get back yato on rails very soon! Thank you for your quick resolution @Bl3f ! Has it something to do with nao :-D ? Very excited for this by the way. Also, at the moment, my experience with yato is pretty good, very useful for small adhoc tasks on csv files etc. that needs multiple transformations. As a Data Engineer, sometimes I need to get my hands dirty and I found duckdb very relevant to handle csv files. 
However, when I have a lot of transformation steps to perform (cleaning, joining, aggregating etc.), I find it difficult to only use one big sql script to operate my DB. I need to split it in several files for better readability. At first I added a new profile to our dbt-core project, but it meant that I also had to: install duckdb adapter (sometimes it can conflict with an existing adapter, I had some bad experiences with dbt-postgres and dbt-athena haha) update the dbt project yml declare sources etc. I stumbled on one article from Julien Hurault (https://juhache.substack.com/p/pip-install-data-stack) that was mentioning yato as a 'lightweight dbt'. Now for the actual adhoc data project I need to work on in my job, it's far easier to use yato by dumping my sql transformation files in the /sql folder ;-) Yeah, actually I use a few internal functions that I've developed in yato inside nao 🙈. Thank you so much for your feedback, when I read this I see exactly the why I developed yato, which is the simplest way to run SQL queries without the burden of a dbt setup. Once you have a few days of usage that would be interesting to get your feedback on what should we add next to yato (without adding more complexity) I think @MohamedBsh wants to do some stuff :). @benjaminsicard Don't hesitate to open issues or feature requests for the project if you have any suggestions. If you'd like to discuss further, feel free to reach out to me via email or we could set up a call. For more information about me or to get in touch, you can check out my profile here I look forward to potentially collaborating on improving yato!
2025-04-01T04:10:13.721134
2019-09-05T11:10:36
489685298
{ "authors": [ "BlackEdder", "burner" ], "license": "BSL-1.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13637", "repo": "BlackEdder/ggplotd", "url": "https://github.com/BlackEdder/ggplotd/issues/60" }
gharchive/issue
Rotating Axes Labels I have this plot, and I would like to rotate the dates on the x axis. I couldn't find an option for it. Is there any way? Or is this a missing feature? I am afraid this is currently a missing feature. The ideal solution would be to add it to the theme (similar to how it works in ggplot2), but the theme part of ggplotd is pretty sparse and would need a significant amount of work. Merge request #60 introduced some bugs that have now been fixed (and axisLabelAngle has been renamed to axisTextAngle for consistency).
2025-04-01T04:10:13.728998
2019-04-17T22:21:27
434510772
{ "authors": [ "amanabt", "davidtmiller" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13638", "repo": "BlackrockDigital/startbootstrap-clean-blog-jekyll", "url": "https://github.com/BlackrockDigital/startbootstrap-clean-blog-jekyll/pull/191" }
gharchive/pull-request
Category implementation, Travis CI config, sitemap generation, and photo credits added. See demo: https://amanabt.github.io/startbootstrap-clean-blog-jekyll/ Link to the generated sitemap: https://amanabt.github.io/startbootstrap-clean-blog-jekyll/sitemap.xml See My blog: https://amanabt.github.io/blog Modifications: Website modified to react to the classification of a post in certain categories. Modifications include: Using jekyll-archives for categories Separate pages to list the posts from each categories A post can be in multiple or no category Now the Posts link in the navbar is a dropdown menu which which automatically lists the categories of the posts. Additional css for the styling of the dropdown menu also added Option to activate google analytics added. Option to display background photo credits added for page and home layouts. Option to generate sitemaps.org compliant sitemap for the Jekyll site added. Travis CI config and script to check the build of the website added. You will need to config Travis-CI for your repo to get this working. Option to add LinkedIn and Goodreads profile links at the footer. Very good additions, thanks - sorry it's taken me a while to get to these. Thanks @amanabt for the new features and to @coliff for the review. I'll polish some of these new additions up style-wise for the demo, but this is a great set of features. After reviewing a few of these changes, the category implementation is a bit rough - I've remove it for now. It's a feature that everyone may not want, same with the background image. Keeping most of the rest of it though. prism.js is for colour coding the code blocks in a blog I will have a look at the category implementation and see if I can make it better. Thanks for taking time to review the code and your suggestions
2025-04-01T04:10:13.743574
2017-10-29T20:18:28
269431074
{ "authors": [ "BlakeGuilloud", "josephmcasey", "kylevv" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13639", "repo": "BlakeGuilloud/ganon", "url": "https://github.com/BlakeGuilloud/ganon/issues/652" }
gharchive/issue
Fix throttle function There is currently a method called throttle that lives in /lib/throttle.js. It is incomplete and needs to be fixed! Throttle takes a callback function and wait time (ms) and returns a throttled version of the callback. A throttled function can only invoke the original callback as frequently as is specified by the 'wait' argument. Acceptance Criteria: Running yarn test throttle results in tests passing. You have written a skeleton method for someone else to work on. You have written tests surrounding your skeleton method. Running yarn lint does not print any errors to the console! Optional: write one or two more tests surrounding the method you are solving to account for potential edge cases. Please include the skeleton of a new method + an accompanying test for someone else to work on at the time of creating a pull request! A pull request will most likely be denied if it does not contain a skeleton method for someone else to work on! For more information, please read the Contributing Guide. Thank you so much for your contribution! I'll pick this one up! Thanks @josephmcasey ! @kylevv , thanks for the throttle function! I submitted a pull request #720 . Looking forward to my review!
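For whoever picks this up, the behaviour described above can be sketched as follows. This is an illustrative Python version of the throttle semantics, not the contents of /lib/throttle.js (the actual task is JavaScript), and the leading-edge behaviour (the first call runs immediately, later calls inside the window are dropped) is one reasonable reading of the acceptance criteria:

import time

def throttle(callback, wait):
    """Return a wrapper that invokes `callback` at most once every `wait` milliseconds."""
    last_called = float("-inf")  # time of the last allowed invocation, in ms

    def throttled(*args, **kwargs):
        nonlocal last_called
        now = time.monotonic() * 1000
        if now - last_called >= wait:
            last_called = now
            return callback(*args, **kwargs)
        return None  # call suppressed: it arrived inside the wait window

    return throttled

# Usage: log = throttle(print, 1000) prints at most once per second,
# no matter how often log(...) is called.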
2025-04-01T04:10:13.785354
2022-11-20T19:50:53
1457024150
{ "authors": [ "JavaCafe01", "rwtallant13" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13640", "repo": "BlingCorp/bling", "url": "https://github.com/BlingCorp/bling/issues/185" }
gharchive/issue
Request: Horizontal mstab I would love a horizontal stacked mstab for my vertical monitor. I spent a little while trying to rewrite the file and get it working myself but I'm a lua novice still. Thanks. @rwtallant13 Open up a draft PR! We can help along the way.
2025-04-01T04:10:13.810325
2018-10-16T14:19:26
370636734
{ "authors": [ "philderbeast" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13641", "repo": "BlockScope/hpack-dhall", "url": "https://github.com/BlockScope/hpack-dhall/issues/7" }
gharchive/issue
Couldn't match type ‘Value’ with ‘([String], Value)’ with ghc-8.6.1. With stack-8.6.1.yaml:
resolver: nightly-2018-10-15
packages:
- '.'
extra-deps:
- dhall-1.18.0
- cborg-<IP_ADDRESS>
- dhall-json-1.2.4
- hpack-0.31.0
- megaparsec-7.0.1
- repline-<IP_ADDRESS>
- serialise-<IP_ADDRESS>
- infer-license-0.2.0
hpack-dhall> stack build --stack-yaml=stack-8.6.1.yaml
...
[1 of 3] Compiling Hpack.Dhall ( src/Hpack/Dhall.hs, .stack-work/.../Dhall.o )
/Users/pdejoux/dev/src/haskell/hpack-dhall/src/Hpack/Dhall.hs:34:69: error:
• Couldn't match type ‘Value’ with ‘([String], Value)’
Expected type: FilePath -> IO (Either String ([String], Value))
Actual type: FilePath -> IO (Either String Value)
• In the first argument of ‘Hpack.setDecode’, namely ‘decodeDhall’
In the second argument of ‘Hpack.hpack’, namely ‘(Hpack.setDecode decodeDhall options)’
In the expression: Hpack.hpack verbose (Hpack.setDecode decodeDhall options)
|
34 | Just (verbose, options) -> Hpack.hpack verbose (Hpack.setDecode decodeDhall options)
|    ^^^^^^^^^^^
Completed 54 action(s).
Fixed by #8.
2025-04-01T04:10:13.899454
2019-12-29T02:41:53
543307751
{ "authors": [ "gyu-don" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13650", "repo": "Blueqat/Blueqat", "url": "https://github.com/Blueqat/Blueqat/issues/76" }
gharchive/issue
Numba backend returns wrong statevector
from blueqat import Circuit
c = Circuit()
c.h[0]
c.cx[0, 1]; c.z[2];
import numpy as np
v1 = c.run_with_numpy()
v2 = c.run_with_numba()
v3 = c.run_with_qgate()
print('numpy')
print(v1)
print('numba')
print(v2)
print('qgate')
print(v3)
Expected: all backends return the same statevector.
Actual:
numpy [0.70710678+0.j 0. +0.j 0. +0.j 0.70710678+0.j 0. +0.j 0. +0.j 0. +0.j 0. +0.j]
numba [ 0.70710678+0.j 0.70710678+0.j 0. +0.j 0. +0.j -0. +0.j -0. +0.j -0. +0.j -0. +0.j]
qgate [0.70710678+0.j 0. +0.j 0. +0.j 0.70710678+0.j 0. +0.j 0. +0.j 0. +0.j 0. +0.j]
Easier case: c = Circuit().x[0].cx[0, 1].z[2]
c = Circuit().i[2].x[0].cx[0, 1] is the easiest case. Maybe, when n_qubit is 3, cx is not working properly.
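A quick way to check the discrepancy programmatically, using only the calls already shown in this report (the tolerance value is an arbitrary illustrative choice):

import numpy as np
from blueqat import Circuit

# Easiest reproducing case from the report: the identity gate on qubit 2 forces n_qubit = 3
c = Circuit().i[2].x[0].cx[0, 1]

v_numpy = np.asarray(c.run_with_numpy())
v_numba = np.asarray(c.run_with_numba())

# Expected True; per the report this comes out False on the numba backend
print(np.allclose(v_numpy, v_numba, atol=1e-8))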
2025-04-01T04:10:13.922385
2023-05-16T09:01:53
1711722048
{ "authors": [ "jwindhager", "nathansteenbuck" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13651", "repo": "BodenmillerGroup/steinbock", "url": "https://github.com/BodenmillerGroup/steinbock/issues/184" }
gharchive/issue
Print specific error message for missing ROI I am analysing an mcd file with multiple ROIs and stumbled upon this error: 2023-05-15 15:11:13,563 WARNING steinbock.preprocessing.imc - Error reading acquisition 72 from file /tmp/tmp63oinnto/6483_Immune/6483_Immune.mcd: MCD file '6483_Immune.mcd' corrupted: invalid acquisition image data offsets However, instead of issues with the image offsets, the problem is that the .txt file of ROI 72 doesn't exist. My suggestion would be to print a more specific error message catching that exception :) I understand that the message is not informative in your case, however: steinbock uses mcd files as primary input; if the mcd file is broken AND a matching txt file is available, it will try to recover the data; if not, the error message giving details on why the mcd file is broken is logged (which seems to be the case here). One could of course think of being more verbose and adding a message to the log saying that the mcd file is broken AND that no matching txt file has been found for recovery (basically add an else branch to https://github.com/BodenmillerGroup/steinbock/blob/bc10207a5a690254bb4e65089c06004fe3f19b05/steinbock/preprocessing/imc.py#L369C53-L404). I think this would be a good addition @Milad4849
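A minimal sketch of the suggested else branch. The function and variable names below (read_mcd, read_txt, mcd_path, txt_path) are placeholders for illustration and are not the actual identifiers used in steinbock/preprocessing/imc.py:

import logging

logger = logging.getLogger(__name__)

def read_acquisition_with_fallback(mcd_path, txt_path, read_mcd, read_txt):
    try:
        return read_mcd(mcd_path)
    except Exception as mcd_error:
        logger.warning("Error reading acquisition from %s: %s", mcd_path, mcd_error)
        if txt_path is not None and txt_path.exists():  # txt_path assumed to be a pathlib.Path
            logger.warning("Recovering acquisition from %s", txt_path)
            return read_txt(txt_path)
        else:
            # The suggested addition: state explicitly that no matching .txt file was found
            logger.warning("MCD file %s is corrupted and no matching .txt file was found for recovery", mcd_path)
            raise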
2025-04-01T04:10:13.937236
2016-09-23T19:49:43
178959497
{ "authors": [ "BohdanTkachenko" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13652", "repo": "BohdanTkachenko/webpack-split-by-path", "url": "https://github.com/BohdanTkachenko/webpack-split-by-path/issues/24" }
gharchive/issue
Correctly split package versions for webpack 1 and webpack 2 Publish 1.x.x versions for webpack 1 Publish 2.x.x versions for webpack 2 Related to #20 Master branch is for webpack 2.x.x For webpack 1.x.x there is a webpack2 branch Also, published to NPM versions 1.0.0 and 2.0.0
2025-04-01T04:10:13.964684
2020-02-19T03:04:33
567284830
{ "authors": [ "Jazb", "ssddanbrown" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13653", "repo": "BookStackApp/BookStack", "url": "https://github.com/BookStackApp/BookStack/issues/1907" }
gharchive/issue
Route method not allowed on saml/acs POST When a request from the SAML Google provider is passed to the saml/acs endpoint, I receive an error: The POST method is not supported for this route. Supported methods: GET, HEAD. Found on Symfony\Component\HttpKernel\Exception\MethodNotAllowedHttpException. The instance of BookStack has been updated from an old version following the instructions in the documentation. I executed the following commands:
git pull origin release && composer install && php artisan migrate
php artisan cache:clear && php artisan config:clear && php artisan view:clear
php artisan bookstack:regenerate-permissions
php artisan bookstack:regenerate-search
php artisan optimize:clear
chown $user: -R . # To fix perm
At least according to the artisan route list, saml/acs does allow POST: php artisan route:list | grep saml Hi @Jazb, My guess is that Google is not redirecting you to the correct endpoint. Are you familiar with using browser developer tools at all? If so, you could open the "Network" tab and ensure the redirect from Google is sending you to the correct endpoint with the correct method. Also, just to be sure, are you using /saml2/acs? Since your post states /saml/acs... Here I attached the error; I saw through the Network tab that the POST correctly goes to the appropriate endpoint, and the nginx log also shows the same, correct path. https://flareapp.io/share/xmN0xb70 This is the error itself, is there any other possibility that I can explore? Hi @Jazb, Apologies for this, it looks like I've mis-typed this in the documentation. The endpoint should be /saml2/acs instead of /saml2/asc. I've labelled this to remind myself to update the docs when I next can do so. Docs now updated so I'll close this. If you continue to have issues feel free to open a new issue referencing this one.
2025-04-01T04:10:13.975348
2021-04-21T14:06:27
863931122
{ "authors": [ "Bolthier", "ssddanbrown" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13654", "repo": "BookStackApp/BookStack", "url": "https://github.com/BookStackApp/BookStack/issues/2698" }
gharchive/issue
Feature: Custom group and personal user roles Describe the feature you'd like For normal users to give permissions for items to certain users it's always necessary to create user roles on a global level through an admin. Be it personal user roles or user groups. I'd like a feature where users would always have a personal user role which can be given permissions through a dropdown menu on any custom permission site. The second feature would allow normal users to create custom groups which then could be used in custom permissions. The user with ownership of this custom group (and any admin with permission to manage user roles) should be able to add users to the group. This feature should be optional and configurable through the system permissions of a role through an optional check like "Manage own custom group roles". Besides the option to add or remove users of this custom group, there should be an option to change the visibility to "Group owner", "Group members" and "Public". This way a custom group role would only appear in custom permissions for users with said visibility. Should a user lose visibility for a custom user group, but has set active permissions for this group for one of their items, the custom user group should still be visible. Lastly a user should have the option to delete a custom group role. Some minor improvements I would also suggest: Sort roles alphabetically in custom permissions and roles overview. Right now they are sorted by insert date. A filter option to easily search for a role. Mostly relevant for the personal user roles. Option to add users directly while managing a global role. Overview of roles existing and active roles for every role type. Describe the benefits this feature would bring to BookStack users Improved options for users to manage their own custom permissions for whole books, shelves and pages without an admin or the need to give normal users permissions to manage global user roles. Additional context Was thinking about these features for some time now and just wanted them off my chest. Keep up the good work! Thanks for the suggestions and feedback @Bolthier. To be honest, multi-request issues are quite hard to manage so I'm going to close this off otherwise it'll remain open forever. Per-user entity permissions have been requested in #1747 and are something I'd very much hope to achieve during the permission system review later in the roadmap. I've added a reference to these suggestions within #410 since that'll be a core part of changing up the permission user interface (Again, planned for permission review on roadmap). In regards to the "Custom Group Roles", to be honest it looks like a bit of a minefield, I'm having trouble getting my head around the various abilities, further permissions and added UI that may be needed to support that. The permission and role system is already fairly complex, adding user-level permissions will take that a step further then I think I'll be at the limit of my desired complexity for a good while. Such a system seems like it'll introduce a fair amount of new logical confusion scenarios to handle. You've explained this functionality out in detail but you have not described what this would fundamentally achieve/allow within BookStack. Maybe if you could explain that we might be able to come up with a simpler path to solving the core issue you have. In regards to the addition improvements: Sort roles alphabetically in custom permissions and roles overview. Right now they are sorted by insert date.
That's an awesome idea which I'm sure others may appreciate, while being pretty darn easy & quick to implement. Feel free to re-open that as a new issue if still desired. A filter option to easily search for a role. Mostly relevant for the personal user roles. As part of #410, I envision there being no big list by default. Instead you'd search for an existing role/user and then bring them into the "list of checkboxes". The overrides would then only apply to the roles you bring into the list. Will probably have some form of an "Everyone else" option within this as well. It will be a fairly large shift to managing permissions yet ideally a more intuitive one. Option to add users directly while managing a global role. We could do this, but it'd be one of those things that is a little annoying to implement via the design of how BookStack is built, since you're wanting two different modification abilities on a single view. Would require AJAX/Dynamic handling for the user assignment parts since you could not do page refreshes since the main priority of that page would be intended for editing the role itself. Not impossible though and I can understand the benefit. Overview of roles existing and active roles for every role type. I'm not sure I understand this, Is this in relation to the two core additions requested? Per-user entity permissions have been requested in #1747 and are something I'd very much hope to achieve during the permission system review later in the roadmap. Nice. I'll look forward to it. In regards to the "Custom Group Roles", to be honest it looks like a bit of a minefield,[...] Agree with you on that one. With the permissions system for review on the roadmap I'll just wait and see. Sort roles alphabetically in custom permissions and roles overview. Right now they are sorted by insert date. That's an awesome idea which I'm sure others may appreciate, while being pretty darn easy & quick to implement. Feel free to re-open that as a new issue if still desired. I'll do that. The other minor suggestions would only be needed with a bulkier system.
2025-04-01T04:10:14.034005
2022-03-21T21:02:23
1175936386
{ "authors": [ "BottlecapDave", "itsfja" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13657", "repo": "BottlecapDave/HomeAssistant-OctopusEnergy", "url": "https://github.com/BottlecapDave/HomeAssistant-OctopusEnergy/issues/45" }
gharchive/issue
Gas meter not discovered/no sensors generated for gas. Describe the bug Missing gas meter sensors To Reproduce Added integration, current and previous electricity meter added, no gas meter A gas meter may be seen to be present when running an API curl command https://api.octopus.energy/v1/accounts/A-xxxxxxx/ (the same key and acount added to the integrations. The meter is SMETS1 but is adopted )I tried ticked and unticked). Expected behavior Gas meter sensors to be added. Home Assistant Logs I can't find anything useful. Sorry to hear you're having issues. I've just released a new version with some additional debugging information around adding sensors. To get this information, the first thing to do is increase the log levels for the component. This can be done by setting the following values in your configuration.yaml file. logger: logs: custom_components.octopus_energy: info If you don't have access to this file, then you should be able to set the log levels using the available services. Once done, you'll need to reload the integration and then check the "full home assistant log". You should then see entries associated with this component stating either sensors were added, skipped or no sensors were available at all. The identifiers of the sensors should then be checked against your Octopus Energy dashboard to verify the correct sensors are being picked up. Thanks for the prompt response. I deleted the integration, downloaded the latest version, installed and there are now no sensors. The major error in the log is now; Logger: homeassistant.components.sensor Source: custom_components/octopus_energy/sensor.py:132 Integration: Sensor (documentation, issues) First occurred: 20:15:42 (3 occurrences) Last logged: 20:16:17 Error while setting up octopus_energy platform for sensor Traceback (most recent call last): File "/usr/src/homeassistant/homeassistant/helpers/entity_platform.py", line 249, in _async_setup_platform await asyncio.shield(task) File "/config/custom_components/octopus_energy/sensor.py", line 97, in async_setup_entry await async_setup_default_sensors(hass, entry, async_add_entities) File "/config/custom_components/octopus_energy/sensor.py", line 132, in async_setup_default_sensors _LOGGER.info(f'Skipping electricity meter due to no active agreement; mpan: {point["mpan"]}; serial number: {meter["serial_number"]}') UnboundLocalError: local variable 'meter' referenced before assignment I'll try again using branch main (it defaulted to develop. Oops. Nope, that looks like my dodgy logic. There is a new version available at https://github.com/BottlecapDave/HomeAssistant-OctopusEnergy/releases/tag/v4.1.1. Based on where it's erroring, it looks like you have some meters without active agreements. Have you had sensors replaced? Do you know what kind of tariff you're on? On 8.1.0 - I can see 8 sensors, all related to the electricity meters, I am unable to find the word "gas" in the log. I am on the TEP, that might entertain you for a bit, they just added a second tariff to the one meter. From Octopus; Gas: Octopus 12M Fixed Electricity: Sorry, we weren't able to get the details for your tariff. (I'm betting it will be "Tesla Energy Plan Export" tomorrow) Electricity: Tesla Energy Plan Import. The newly added information doesn't seem to appear in the HA log panel. You'll need to click "Load Full Home Assistant Log" to get the debug logs. As mentioned, 4.1.1 should fix your previous error. 
On 4.1.1 - Logger: homeassistant.components.sensor Source: custom_components/octopus_energy/sensor.py:151 Integration: Sensor (documentation, issues) First occurred: 21:15:43 (1 occurrences) Last logged: 21:15:43 Error while setting up octopus_energy platform for sensor Traceback (most recent call last): File "/usr/src/homeassistant/homeassistant/helpers/entity_platform.py", line 249, in _async_setup_platform await asyncio.shield(task) File "/config/custom_components/octopus_energy/sensor.py", line 97, in async_setup_entry await async_setup_default_sensors(hass, entry, async_add_entities) File "/config/custom_components/octopus_energy/sensor.py", line 151, in async_setup_default_sensors _LOGGER.info(f'Skipping gas meter due to no active agreement; mpan: {point["mpan"]}; serial number: {meter["serial_number"]}') KeyError: 'mpan' From the full log; 2022-03-22 21:15:43 INFO (MainThread) [custom_components.octopus_energy.sensor] Skipping electricity meter due to no active agreement; mpan and serial number: 2022-03-22 21:15:43 INFO (MainThread) [custom_components.octopus_energy.sensor] agreements: [] 2022-03-22 21:15:43 INFO (MainThread) [custom_components.octopus_energy.sensor] Adding electricity meter; mpan: and; serial number: 2022-03-22 21:15:43 INFO (MainThread) [custom_components.octopus_energy.sensor] Adding electricity meter; mpan: and; serial number: Still no Gas. Much like half the word at the moment. 🙄 Another fix inbound. At some point I should deploy some code that works in this area. Once deployed, it should indicate the gas sensor that is being skipped and the agreements associated with the sensor which should indicate why it's being skipped. Fast coder! It looks like it's not picking up the final agreement for your gas despite it still being active, so that's where I'll take a look. In regards to logs, what you have provided is sufficient. However, I think I'm going to expose an overarching device so that the relevant information can be extracted more easily using https://www.home-assistant.io/integrations/diagnostics/ I've just released https://github.com/BottlecapDave/HomeAssistant-OctopusEnergy/releases/tag/v4.1.3, which should fix your issue. 4.1.3 Unable to get past Setup your basic account information screen. 
"Account information was not found" 4.1.3 2022-03-23 19:28:57 INFO (MainThread) [custom_components.octopus_energy.sensor] Skipping electricity meter due to no active agreement; mpan: 002; serial number: 207 2022-03-23 19:28:57 INFO (MainThread) [custom_components.octopus_energy.sensor] agreements: [] 2022-03-23 19:28:57 INFO (MainThread) [custom_components.octopus_energy.sensor] Adding electricity meter; mpan:<PHONE_NUMBER>768; serial number: 639 2022-03-23 19:28:57 INFO (MainThread) [custom_components.octopus_energy.sensor] Adding electricity meter; mpan: 768; serial number: 207 2022-03-23 19:28:57 INFO (MainThread) [custom_components.octopus_energy.sensor] Adding gas meter; mprn: 101; serial number: 171 2022-03-23 19:28:57 INFO (MainThread) [custom_components.octopus_energy.sensor] Adding gas meter; mprn: 101; serial number: 800 2022-03-23 19:28:57 INFO (MainThread) [custom_components.octopus_energy.sensor] Updating OctopusEnergyGasCurrentRate 2022-03-23 19:28:57 INFO (MainThread) [custom_components.octopus_energy.sensor] Updating OctopusEnergyGasCurrentRate 2022-03-23 19:28:57 INFO (MainThread) [custom_components.octopus_energy.sensor] Updating OctopusEnergyElectricityPreviousRate for '768/639' 2022-03-23 19:28:57 INFO (MainThread) [custom_components.octopus_energy.sensor] Updating OctopusEnergyElectricityCurrentRate for '768/207' 2022-03-23 19:28:57 INFO (MainThread) [custom_components.octopus_energy.sensor] Updating OctopusEnergyElectricityPreviousRate for '768/207' 2022-03-23 19:28:57 INFO (MainThread) [custom_components.octopus_energy.sensor] Updating OctopusEnergyElectricityCurrentRate for '768/639' 2022-03-23 19:28:58 INFO (MainThread) [custom_components.octopus_energy.sensor] Calculated previous gas consumption for '101/800'... 2022-03-23 19:28:58 INFO (MainThread) [custom_components.octopus_energy.sensor] Calculated previous electricity consumption for '768/207'... 2022-03-23 19:29:59 INFO (MainThread) [custom_components.octopus_energy.sensor] Calculated previous gas consumption cost for '101/800'... 2022-03-23 19:30:57 INFO (MainThread) [custom_components.octopus_energy.sensor] Updating OctopusEnergyElectricityPreviousRate for '768/639' 2022-03-23 19:30:57 INFO (MainThread) [custom_components.octopus_energy.sensor] Updating OctopusEnergyElectricityCurrentRate for '768/207' 2022-03-23 19:30:57 INFO (MainThread) [custom_components.octopus_energy.sensor] Updating OctopusEnergyElectricityPreviousRate for '768/207' 2022-03-23 19:30:57 INFO (MainThread) [custom_components.octopus_energy.sensor] Updating OctopusEnergyElectricityCurrentRate for '768/639' So ... Lots of meters to look at, the gas is present. Some ghosts (I'll just hide them. Sorted, Thanks for a very prompt fix. Glad it worked :)
2025-04-01T04:10:14.049250
2021-09-01T18:40:51
985480729
{ "authors": [ "atmofunk", "joshtynjala" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13658", "repo": "BowlerHatLLC/vscode-as3mxml", "url": "https://github.com/BowlerHatLLC/vscode-as3mxml/issues/549" }
gharchive/issue
Extension failed to start So this just popped up on me this morning and I'm at a loss for what to do about it: literally nothing changed (afaik) from when I was working yesterday until this morning. No Windows update, no VS Code update, no extension update. I've restarted multiple times, rebooted my PC, and updated Java (after I discovered the problem, not before). Any ideas? Need more info? Open VSCode's command palette and run the "Developer: Toggle Developer Tools" command. This should open Electron's browser developer tools. In the console, it might have more detailed error messages.
2025-04-01T04:10:14.091124
2017-09-20T06:18:45
259051857
{ "authors": [ "SeVeNDuS", "ethanneff", "scottwb", "sirpy" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13659", "repo": "BranchMetrics/cordova-ionic-phonegap-branch-deep-linking", "url": "https://github.com/BranchMetrics/cordova-ionic-phonegap-branch-deep-linking/issues/369" }
gharchive/issue
Error: BRANCH SDK: Invalid "android-prefix" in in your config.xml I get this error when executing cordova build android but I don't have that tag in my config.xml branch-cordova-sdk 2.6.12 Something is wrong in that version Regards Hello, This error is generated from here. An example of a correct android-prefix is found here. Keep in mind, this value is only needed if you are using a bnc.lt or custom link-domain (non app.link link). Can you give me an example of your branch-config within your config.xml so I can troubleshoot your issue further? Thanks, Thanks for the quick reply I use custom domains then the error is produced by that anyway, Which value should I put in Android-prefix?? I understand how this can be hard to find (even I am having a difficult time finding it on our dashboard). Your best bet is to create a link with your branch_key using our API (this is a cURL command you can copy-paste into your terminal). curl -XPOST https://api.branch.io/v1/url -d '{ "branch_key": "key_live_piAfzOlymTsKZwXUHo6utjilBrimGZRa", "channel": "test" }' {"url":"http://link.eneff.io/LVeu/LEa08QkyAG"} The android-prefix in this example response is /LVeu. You need to add this to your config.xml. Thanks, Found it. If you have a custom link domain enabled, then you can find your android-prefix on https://branch.dashboard.branch.io/link-settings Best, Thanks!! Solved!! The above description doesnt seem valid anymore. We are trying to configure our own domain as the applink on android, so when clicking on a link to our domain the app will open. How do we achieve that? Custom link domains can be configured here: https://docs.branch.io/pages/dashboard/integrate/#change-link-domain Custom link domains will have a pathPrefix attached to them (e.g. /LVeu) which will need to be added to your config.xml configuration. https://docs.branch.io/pages/apps/cordova-phonegap-ionic/#optional-app-config I guess my question is not how to use custom domains but how do i get regular web links to our website to open the app itself, on ios we can easily add an applink domain and it works together with branch app.link domain but how do we do that on android? @sirpy The default behavior of Branch is: Your app is not installed User clicks on a Branch deep link Device navigates to the fallback (e.g. an app store or website) User installs and opens app Branch passes deep link data into app Your app is installed User clicks on a Branch deep link Device opens app Branch passes deep link data into app This behavior should work regardless of Android or iOS. Of course, this behavior only works for your Branch links (deep links). Your regular web links on your website (hyperlinks) will not be able to open your app. This is because Branch deep links have thousands of unique redirections built into them.
For example: If Branch deep link + clicked on Facebook mobile app + clicked on Android M or greater + your app is installed, then open your app with your URI Scheme your_custom_uri_scheme://open?link_click_id=link-44270354495348291 Since converting all your web links to Branch links may be a daunting task, you can always integrate our Web SDK to add a banner to your website to convert web users to app users. Thanks, it's not the question. in other deeplink plugins such as universal-links plugin for cordova you define YOUR OWN domain as an applink domain. so any click on a link of your domain will open the app if it is installed. so for us "knil.co", we want every click on knil.co to open the app if it is installed. now since you say branch is not compatible with universal links plugin we need some other way to configure android (on cordova) to open links of "knil.co" in our app. for IOS on cordova we simply add "knil.co" to the applink domains list where we add "X.applink.link"(ie branch domain) and they both work, clicking on knil.co links on ios opens our app. clicking on applink.link opens our app also with branch sdk trigger. HOW DO WE DO THIS FOR ANDROID? Branch seems to no longer put prefixes on our links, and the dashboard UI @ethanneff shows above no longer includes an indication of what your prefix should be. Same with the curl command he gives. However, the code still requires that you have an android-prefix, and it has to match a specific regex, so it cannot be blank. If you put in a fake one that that satisfies the regex, you can build, but your links won't work as App Links on Android. This seems like a bug in https://github.com/BranchMetrics/cordova-ionic-phonegap-branch-deep-linking/blob/495e6c74134fb22e9cb479a9a372f60794b1ab87/src/scripts/android/updateAndroidManifest.js#L287 to me. Should at least let it be optional, I think. Add xmlns:android="http://schemas.android.com/apk/res/android" to the <widget> tag at the top of config.xml for your app. Add add a block like this inside the <platform name="android"> block of your app's config.xml <custom-config-file target="AndroidManifest.xml" parent="./application/activity[@android:name='MainActivity']/intent-filter[@android:name='io.branch.sdk.AppLink']"> <data android:host="my.customdomain.com" android:scheme="https" /> </custom-config-file> If you are using an older version of the custom-config-file plugin, change <custom-config-file> to just <config-file> Rebuild and keep your fingers crossed. Check platform/android/AndroidManifest.xml to see if it looks like you want it to...that is... without a prefix attribute in the <data> element we overwrote in step 4. (That's really all this does...replace the faulty element the branch plugin generates with your fake prefix that you don't need.) You are correct. Prefixes are no longer needed for custom link domains (they were unsightly). I'll push a hotfix today to remove the requirement for prefixes on custom link domains
2025-04-01T04:10:14.097458
2016-07-10T19:40:20
164730515
{ "authors": [ "aaustin", "athibaud", "rt2zz" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13660", "repo": "BranchMetrics/react-native-branch-deep-linking", "url": "https://github.com/BranchMetrics/react-native-branch-deep-linking/pull/26" }
gharchive/pull-request
expose email subject for share sheet @rt2zz This was requested on the core library and I had forgotten we already do expose it. This makes the interface a bit more consistent 👍 looks great, will include in a minor version release early next week. thanks! though now we have a messageHeader prop which is Android-only that appears as a header on some share platforms (e.g. Slack) but also as the email's subject.. + an iOS-only emailSubject prop which is only used as the email's subject.. maybe we can unify this a little? only expose messageHeader and use that as emailSubject on iOS for now.. ?
2025-04-01T04:10:14.101312
2019-09-15T08:19:25
493709495
{ "authors": [ "silky" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13661", "repo": "BraneShop/showreel", "url": "https://github.com/BraneShop/showreel/issues/153" }
gharchive/issue
3D Ken Burns Effect from a Single Image ( needs to be a gif ) https://arxiv.org/pdf/1909.05483.pdf http://sniklaus.com/papers/kenburns
2025-04-01T04:10:14.129877
2022-01-28T14:28:03
1117468779
{ "authors": [ "Breakthrough", "scuba14" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13662", "repo": "Breakthrough/PySceneDetect", "url": "https://github.com/Breakthrough/PySceneDetect/issues/257" }
gharchive/issue
Using an in-memory bytestream (io.BytesIO object) as video input Description of Problem & Solution I am using a scraper which is downloading and processing videos. While downloading, the videos are stored into io.BytesIO objects which are held in memory by Python. The problem is that I cannot pipe this bytestream into PySceneDetect because the VideoManager class is based on the OpenCV VideoCapture class. A fix would be to write the videos to disk or to a temporary file, but that defeats the point of pipeline processing. A solution would be to change the video_manager class so it can also accept in-memory BytesIO objects or OS pipe inputs. Maybe I am missing something and it's already possible, but with the change of the video_manager class the following would be possible:
Process video with ffmpeg etc. --> stdout pipe / named pipe
Read video from stdout into io.BytesIO
Run scene detection on io.BytesIO or directly from step 1 (stdout pipe / named pipe)
Get scenes from PySceneDetect and apply them to the video held in memory (io.BytesIO)
Do some other pipeline stuff without writing the video to a file ...
This functionality could even go further: a livestream captured with ffmpeg could be processed and scene-detected live thanks to: https://github.com/Breakthrough/PySceneDetect/issues/5 Solution: Implement stdin pipe and/or BytesIO support in the video_manager class This may be possible once #213 is implemented via PyAV: https://pyav.org/docs/stable/api/_globals.html#av.open Parameters: file (str) – The file to open, which can be either a string or a file-like object. That should work for your use case, although I have not tested it myself yet. Hope this helps for now! I just did a quick test and this does seem to work with the new VideoStreamAv backend. I'll add a constructor parameter to allow this to work for v0.6. Will be included in v0.6 release. Added tests to demonstrate example usage: https://github.com/Breakthrough/PySceneDetect/blob/v0.6/tests/test_backend_pyav.py
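To illustrate the PyAV capability the fix relies on, here is a minimal sketch of decoding a video held entirely in memory. The file name is a stand-in for the scraper output, and the wiring into PySceneDetect's v0.6 VideoStreamAv backend is intentionally left out (see the linked test_backend_pyav.py for the actual usage):

import io
import av

with open("video.mp4", "rb") as f:      # stand-in for the in-memory download
    buffer = io.BytesIO(f.read())

container = av.open(buffer)             # av.open accepts file-like objects
for frame in container.decode(video=0):
    pass  # frames decode without the data ever touching disk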
2025-04-01T04:10:14.195200
2016-05-26T15:27:49
157011290
{ "authors": [ "ryantmer" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13663", "repo": "Brightspace/d2l-my-courses-ui", "url": "https://github.com/Brightspace/d2l-my-courses-ui/pull/21" }
gharchive/pull-request
US69204 pinning API call In order for the course tile to do the API call in a cleaner way, I reworked the tile so that the widget just passes in the entire enrollment Entity. Changing the enrollment (i.e. setting it) triggers the observer to parse the enrollment and update the pin button as required. Clicking the pin/unpin button triggers the API request, and updates the pin button as required. The API reply also updates the pin button as required. Disclaimer: haven't tested this in the LMS yet. Once I overcome my SQL woes, I'll also take a look at amping up the demo so that all of these features will work in that by mocking the token/API requests. Unfortunately the pin/unpin icon still has somewhat wonky spacing, but this is caused by the SVG itself rather than any sort of padding/margin, so I'll have to check with @thehappypixel on that once he gets back.
2025-04-01T04:10:14.215441
2024-07-22T11:30:54
2422639630
{ "authors": [ "liias", "unikitty37" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13664", "repo": "Browsers-software/browsers", "url": "https://github.com/Browsers-software/browsers/issues/193" }
gharchive/issue
Easier access to rules editor Is your feature request related to a problem? Please describe. In order to edit a rule, I have to click on a button in the corner of the Browsers pop-up, then choose Preferences from the menu — which means I must first find a link that isn't covered by a rule in order to cause the popup to appear. Describe the solution you'd like I'd like Browsers to have an icon in the system tray on KDE so I can right-click on it and access the preferences directly from there. Describe alternatives you've considered I've tried launching Browsers manually, but no window opens — the icon appears in the task bar briefly, then vanishes. Having this bring the preferences window up would be an acceptable alternative (and possibly easier to implement in a cross-platform way :) At the moment I have a link to https://example.com that I've sent to myself in Slack; that allows me to go to my notes and click the link (as long as I never set a rule for example.com, of course!) But this only works particularly well if I don't have a load of notes to myself in Slack, as otherwise I have to find it first. Additional context I'm not sure how it works on other OSes — on macOS, it would need to run as a menu bar app if it doesn't already. Hey there! Thanks for feedback! On macOS, just launching Browsers on its own will ignore all rules, allowing to access Preferences. You mention KDE - which Linux distro are you using? One of my goals is to keep the app not running in the background, so unfortunately the system tray is not going to have it. I'm using Kubuntu 24.04. Launching Browsers on its own also works as on macOS here, so I'm not sure why it didn't when I tried it beforehand. Possibly some window rule had kept something on top of it — but it works in a way that allows me to get at the rules easily, so I'll close this. Thanks for your time!
2025-04-01T04:10:14.230567
2021-08-17T00:34:28
972198806
{ "authors": [ "BtbN", "kedaitinh12" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13665", "repo": "BtbN/FFmpeg-Builds", "url": "https://github.com/BtbN/FFmpeg-Builds/issues/96" }
gharchive/issue
Possible windows x86 build?? Can you add x86 ver for ffmpeg?? You can build 32 bit yourself with the scripts if you really need it, but generally, you should be using 64bit. To repeat myself again: 32bit would waste asset space and would result in other more useful builds to be removed. Plus, a whole bunch of people would download the 32bit builds because they just don't know better, and end up with a slower build and missing features. Thanks for reply
2025-04-01T04:10:14.249593
2022-07-12T06:33:22
1301603334
{ "authors": [ "BuIlDaLiBlE", "cloudyybaka" ], "license": "Unlicense", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13666", "repo": "BuIlDaLiBlE/BetterHI3Launcher", "url": "https://github.com/BuIlDaLiBlE/BetterHI3Launcher/issues/55" }
gharchive/issue
help Can you elaborate about this? How many times you tried to download? Can you elaborate about this? How many times you tried to download? I have only tried once and that is when I got the error, I am now trying a second time, hopefully, it works. I have only tried once and that is when I got the error, I am now trying a second time, hopefully, it works. Is it ok now? I have only tried once and that is when I got the error, I am now trying a second time, hopefully, it works. Is it ok now? Nope, I left it to download overnight and I came back and it is still on the lan ch screen I have only tried once and that is when I got the error, I am now trying a second time, hopefully, it works. Is it ok now? Nope, I left it to download overnight and I came back and it is still on the launch screen I am gonna try to download it again I have tried many times but it still says the same thing, what do you think I can do to fix it? Thanks for confirming. I could not reproduce the issue, can you try downloading to a different drive? Wait, I could reproduce it just now. Hang on while I investigate this... Can you please send a screenshot from This PC where all the available drives are shown? Also, can you try to download the game again with this test build and report the error it will show? https://bpnet.work/files/bbh3l/TEST/BetterHI3Launcher.exe BetterHI3Launcher v1.3.20220713.0 [TEST] Working directory: C:\Users\Lueanne Estrada\Desktop\MICAH MOTTLEY OS version: Windows 10 Pro (Version 21H2, Build 19044.1826) OS language: en-US Launcher language: en (autodetect) WARNING: Bp Network connection error, attempt №2... Using server: Global Using mirror: HoYoverse Checking for game update... Ready to install the game Installation directory selected: C:\Users\Lueanne Estrada\Desktop\MICAH MOTTLEY\honkai test\Honkai Impact 3rd Starting to download game archive: BH3_v5.8.0_3746462df53b.7z (https://d2wztyirwsuyyo.cloudfront.net/ptpublic/bh3_global/20220627152836_ZkkbE9mzDSszbFJz/BH3_v5.8.0_3746462df53b.7z) CRITICAL ERROR: Unhandled exception occurred. Stack trace: System.ObjectDisposedException: Cannot access a closed file. at System.IO.__Error.FileNotOpen() at System.IO.FileStream.get_Length() at Hi3Helper.Http.Http.<>c.b__20_1(SessionAttribute x) at System.Linq.Enumerable.WhereSelectListIterator2.MoveNext() at System.Linq.Enumerable.Sum(IEnumerable1 source) at Hi3Helper.Http.Http.d__20.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Int32 numArgs) at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Int32 numArgs, Delegate catchHandler) Can you please send a screenshot from This PC where all the available drives are shown? Thanks for the feedback, this will be investigated. Should be resolved as of v1.4.20230111.0. Feel free to open another issue if you encounter any more errors.
2025-04-01T04:10:14.326524
2022-11-03T22:38:28
1435335229
{ "authors": [ "Builditluc", "nunotexbsd" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13668", "repo": "Builditluc/wiki-tui", "url": "https://github.com/Builditluc/wiki-tui/issues/90" }
gharchive/issue
[BUG] can't disable logging General Information Version: 0.5.1 Installation Method: source Operating System: FreeBSD Describe the bug ~/.config/wiki-tui/config.toml:
[logging]
enabled = false
log_dir = 'wiki_tui.log'
log_level = 'INFO'
The log file is still created even though logging is disabled. Checklist [x] checked other issues for the same bug [x] read CONTRIBUTING.md Thank you for your bug report! I've created a patch that fixes this issue (the logfile won't be created now if logging is disabled) and made a PR. It'll be available in the next release!
2025-04-01T04:10:14.411044
2018-03-23T07:26:39
307926686
{ "authors": [ "Keboo", "Symxn" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13669", "repo": "ButchersBoy/MaterialDesignInXamlToolkit", "url": "https://github.com/ButchersBoy/MaterialDesignInXamlToolkit/issues/926" }
gharchive/issue
Can't find resource MaterialDesignMultiFloatingActionPopupBox Hello, why is this resource not found? Have you removed it, or does it have another name? <materialDesign:PopupBox Style="{StaticResource MaterialDesignMultiFloatingActionPopupBox}"></materialDesign:PopupBox> I use the latest version of "MaterialDesignThemes" (Version <IP_ADDRESS>4). I hope you can help. @Symxn it is still there, but it is not one of the default styles; you have to include it on the XAML pages where you need it, just like in the demo app.
2025-04-01T04:10:14.412271
2016-04-04T07:34:04
145608278
{ "authors": [ "LeeMcNeil", "andrewkm" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13670", "repo": "BuycraftPlugin/BuycraftX", "url": "https://github.com/BuycraftPlugin/BuycraftX/issues/43" }
gharchive/issue
Could not fetch due players queue Caught this in console today: http://pastie.org/pastes/10784783/text?key=6a5qkavcvtlrpj0c51xxia Don't worry about socket timeouts like this, it's just the odd occasion where the server cannot connect to the plugin API. On the player checks this wouldn't drastically affect anything.
2025-04-01T04:10:14.453480
2022-11-20T03:06:01
1456747915
{ "authors": [ "C-STYR", "a-hend" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13671", "repo": "C-STYR/GitHubTrainer", "url": "https://github.com/C-STYR/GitHubTrainer/pull/56" }
gharchive/pull-request
Moved filterArray.js to CSX/Callbacks I'm sure I didn't need a separate branch and PR for this, but I saw it hanging out there and wanted to clean it up. I think I moved this file while I was accidentally in *main, and so it was showing up as an untracked file when I ran git status. In the future should I just add it and commit it on a branch I'm already working on? Yeah, if it's just been orphaned you can add it to any PR, just make mention of it in the comment.
2025-04-01T04:10:14.463207
2024-04-09T13:02:24
2233393926
{ "authors": [ "C0Newb", "xitation" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13672", "repo": "C0Newb/hass-powerpanel", "url": "https://github.com/C0Newb/hass-powerpanel/issues/1" }
gharchive/issue
ERROR: Cannot install pysnmp because these package versions have conflicting dependencies Hi There, I get the following errors when attempting to install your addon, on Home Assistant version 2024.4.2. Apr 09 12:57:55 Xi-Hassio-1 homeassistant[466]: 2024-04-09 22:57:55.584 INFO (SyncWorker_29) [homeassistant.util.package] Attempting install of pysnmp Apr 09 12:58:00 Xi-Hassio-1 homeassistant[466]: 2024-04-09 22:58:00.875 ERROR (SyncWorker_29) [homeassistant.util.package] Unable to install package pysnmp: ERROR: Cannot install pysnmp because these package versions have conflicting dependencies. Apr 09 12:58:00 Xi-Hassio-1 homeassistant[466]: ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts Apr 09 12:58:00 Xi-Hassio-1 homeassistant[466]: 2024-04-09 22:58:00.875 INFO (SyncWorker_29) [homeassistant.util.package] Attempting install of pysnmp Apr 09 12:58:06 Xi-Hassio-1 homeassistant[466]: 2024-04-09 22:58:06.071 ERROR (SyncWorker_29) [homeassistant.util.package] Unable to install package pysnmp: ERROR: Cannot install pysnmp because these package versions have conflicting dependencies. Apr 09 12:58:06 Xi-Hassio-1 homeassistant[466]: ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts Apr 09 12:58:06 Xi-Hassio-1 homeassistant[466]: 2024-04-09 22:58:06.072 INFO (SyncWorker_29) [homeassistant.util.package] Attempting install of pysnmp Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: 2024-04-09 22:58:11.307 ERROR (SyncWorker_29) [homeassistant.util.package] Unable to install package pysnmp: ERROR: Cannot install pysnmp because these package versions have conflicting dependencies. Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: 2024-04-09 22:58:11.309 ERROR (MainThread) [aiohttp.server] Error handling request Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: Traceback (most recent call last): Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: File "/usr/local/lib/python3.12/site-packages/aiohttp/web_protocol.py", line 452, in _handle_request Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: resp = await request_handler(request) Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: File "/usr/local/lib/python3.12/site-packages/aiohttp/web_app.py", line 543, in _handle Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: resp = await handler(request) Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: ^^^^^^^^^^^^^^^^^^^^^^ Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: File "/usr/local/lib/python3.12/site-packages/aiohttp/web_middlewares.py", line 114, in impl Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: return await handler(request) Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: ^^^^^^^^^^^^^^^^^^^^^^ Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: File "/usr/src/homeassistant/homeassistant/components/http/security_filter.py", line 92, in security_filter_middleware Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: return await handler(request) Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: ^^^^^^^^^^^^^^^^^^^^^^ Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: File "/usr/src/homeassistant/homeassistant/components/http/forwarded.py", line 210, in forwarded_middleware Apr 09 12:58:11 
Xi-Hassio-1 homeassistant[466]: return await handler(request) Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: ^^^^^^^^^^^^^^^^^^^^^^ Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: File "/usr/src/homeassistant/homeassistant/components/http/request_context.py", line 26, in request_context_middleware Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: return await handler(request) Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: ^^^^^^^^^^^^^^^^^^^^^^ Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: File "/usr/src/homeassistant/homeassistant/components/http/ban.py", line 88, in ban_middleware Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: return await handler(request) Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: ^^^^^^^^^^^^^^^^^^^^^^ Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: File "/usr/src/homeassistant/homeassistant/components/http/auth.py", line 236, in auth_middleware Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: return await handler(request) Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: ^^^^^^^^^^^^^^^^^^^^^^ Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: File "/usr/src/homeassistant/homeassistant/components/http/headers.py", line 32, in headers_middleware Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: response = await handler(request) Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: ^^^^^^^^^^^^^^^^^^^^^^ Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: File "/usr/src/homeassistant/homeassistant/helpers/http.py", line 73, in handle Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: result = await handler(request, **request.match_info) Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: File "/usr/src/homeassistant/homeassistant/components/http/decorators.py", line 71, in with_admin Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: return await func(self, request, *args, **kwargs) Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: File "/usr/src/homeassistant/homeassistant/components/http/data_validator.py", line 73, in wrapper Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: result = await method(view, request, data, *args, **kwargs) Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: File "/usr/src/homeassistant/homeassistant/components/config/config_entries.py", line 172, in post Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: return await self._post_impl(request, data) Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: File "/usr/src/homeassistant/homeassistant/components/config/config_entries.py", line 179, in _post_impl Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: return await super()._post_impl(request, data) Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: File "/usr/src/homeassistant/homeassistant/helpers/data_entry_flow.py", line 84, in _post_impl Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: result = await self._flow_mgr.async_init( Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: File "/usr/src/homeassistant/homeassistant/config_entries.py", line 1155, in async_init Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: flow, result = await 
self._async_init(flow_id, handler, context, data) Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: File "/usr/src/homeassistant/homeassistant/config_entries.py", line 1175, in _async_init Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: flow = await self.async_create_flow(handler, context=context, data=data) Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: File "/usr/src/homeassistant/homeassistant/config_entries.py", line 1312, in async_create_flow Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: handler = await _async_get_flow_handler( Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: File "/usr/src/homeassistant/homeassistant/config_entries.py", line 2608, in _async_get_flow_handler Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: await _load_integration(hass, domain, hass_config) Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: File "/usr/src/homeassistant/homeassistant/config_entries.py", line 2585, in _load_integration Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: await async_process_deps_reqs(hass, hass_config, integration) Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: File "/usr/src/homeassistant/homeassistant/setup.py", line 551, in async_process_deps_reqs Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: await requirements.async_get_integration_with_requirements( Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: File "/usr/src/homeassistant/homeassistant/requirements.py", line 53, in async_get_integration_with_requirements Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: return await manager.async_get_integration_with_requirements(domain) Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: File "/usr/src/homeassistant/homeassistant/requirements.py", line 176, in async_get_integration_with_requirements Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: await self._async_process_integration(integration, done) Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: File "/usr/src/homeassistant/homeassistant/requirements.py", line 193, in _async_process_integration Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: await self.async_process_requirements( Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: File "/usr/src/homeassistant/homeassistant/requirements.py", line 280, in async_process_requirements Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: await self._async_process_requirements(name, missing) Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: File "/usr/src/homeassistant/homeassistant/requirements.py", line 318, in _async_process_requirements Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: raise RequirementsNotFound(name, list(failures)) Apr 09 12:58:11 Xi-Hassio-1 homeassistant[466]: homeassistant.requirements.RequirementsNotFound: Requirements for powerpanel not found: ['pysnmp']. Thanks. Looks like my version of HA ships with pysnmp-lextudio # docker exec -it homeassistant bash homeassistant:/config# pip list |grep snmp pysnmp-lextudio 6.0.11 pysnmpcrypto 0.0.4 I updated the deps in your package to request this instead and it appears to resolve the issue. I've lodged a PR for you to consider. Thanks. Xi. Hey! Sorry I didn't realize my notifications where turned off.. 
Your two issues here are both related to pysnmp. Getting pysnmp to install is a bit of a hassle. I didn't realize pysnmp-lextudio was there; using it will be fine. It does cause your second issue there, however. When we call hlapi.getCmd here we're already running in an event loop, but that method then tries to create its own event loop for whatever reason. This is causing that second error. Reading https://github.com/home-assistant/core/issues/110100#issuecomment-1989501420, it seems using pysnmp-lextudio==5.0.34 fixes this issue. Try that to see if it works for you, and if it does I'll merge it in. Thanks! And what you entered into the config flow should work, sorry about the text being missing there 😅 Make sure you're using SNMPv1, there's no SNMPv3 support ... yet.
Hi @C0Newb, Gave it a crack just now, still throwing an error.
Apr 15 04:02:17 Xi-Hassio-1 homeassistant[466]: 2024-04-15 14:02:17.217 INFO (SyncWorker_21) [homeassistant.util.package] Attempting install of pysnmp-lextudio==5.0.34
Apr 15 04:03:07 Xi-Hassio-1 homeassistant[466]: 2024-04-15 14:03:07.794 ERROR (MainThread) [custom_components.powerpanel] Unable to connect to snmp: Traceback (most recent call last):
  File "/config/custom_components/powerpanel/config_flow.py", line 57, in async_step_user
    PowerPanelSnmpMonitor(ipaddress, port, username, scanInterval)
  File "/config/custom_components/powerpanel/sensor.py", line 308, in __init__
    self.update_stats() # try this to throw error if not working.
  File "/config/custom_components/powerpanel/sensor.py", line 424, in update_stats
    data = __class__.get(
  File "/config/custom_components/powerpanel/sensor.py", line 349, in get
    handler = hlapi.getCmd(
  File "/usr/local/lib/python3.12/site-packages/pysnmp/hlapi/asyncio/sync/cmdgen.py", line 104, in getCmd
  File "/usr/local/lib/python3.12/asyncio/base_events.py", line 661, in run_until_complete
    self._check_running()
  File "/usr/local/lib/python3.12/asyncio/base_events.py", line 620, in _check_running
    raise RuntimeError('This event loop is already running')
RuntimeError: This event loop is already running
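One way around the nested-loop error described above is to use pysnmp's asyncio API directly and await it from Home Assistant's already-running loop, instead of the synchronous hlapi.getCmd wrapper that spins up its own loop. The sketch below is only an illustration, not the integration's actual code: import paths and argument names vary between pysnmp / pysnmp-lextudio releases, and the host, community string and OID are placeholders.

from pysnmp.hlapi.asyncio import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

async def fetch_ups_oid(host: str, oid: str, community: str = "public"):
    # Await the coroutine from the current event loop; no run_until_complete needed.
    error_indication, error_status, error_index, var_binds = await getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=0),   # mpModel=0 selects SNMPv1
        UdpTransportTarget((host, 161)),
        ContextData(),
        ObjectType(ObjectIdentity(oid)),
    )
    if error_indication or error_status:
        raise RuntimeError(f"SNMP query failed: {error_indication or error_status}")
    return var_binds

# Inside a config flow or update coordinator the call would simply be awaited,
# e.g. data = await fetch_ups_oid("192.168.1.50", "1.3.6.1.2.1.1.1.0"),
# so no nested event loop is created.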
2025-04-01T04:10:14.465745
2013-12-02T03:55:47
23551064
{ "authors": [ "C0deH4cker" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13673", "repo": "C0deH4cker/SuperCalc", "url": "https://github.com/C0deH4cker/SuperCalc/issues/9" }
gharchive/issue
Incorrect order of precedence with function calls
In the alpha branch, the following test case is incorrect:
>>> ?? 3(4)^2
^ ( [a] 3( [0] 4 ) [b] 2 )
(3(4) ^ 2)
144
The correct value is 48 (by evaluating 4^2 first, then multiplying the result by 3). This will be a difficult fix. I will likely throw in a specific check in BinOp_eval where a function call is being raised to a power. One unfortunate side effect that will still remain is that the verbose output will appear incorrect, but this is less important for now anyway.
UPDATE: This bug also applies to unary operators:
>>> 4(3)!
479001600
>>> 12!
479001600
>>> 4*(3)!
24
Fixed in 3178de716fadf3a47534a6f95e2164f9d84ca58b
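Since SuperCalc itself is written in C, the following Python snippet only illustrates the kind of special case described above, letting '^' bind to the parenthesised part of an implicit multiplication so that 3(4)^2 evaluates to 48 rather than 144. The node type and evaluator are invented for the example and are not SuperCalc's actual AST.

from dataclasses import dataclass

@dataclass
class ImplicitMul:
    # Hypothetical node for juxtaposition like "3(4)": a coefficient next to a group/call.
    coefficient: float
    group: float   # value of the parenthesised part, already evaluated

def eval_pow(base, exponent: float) -> float:
    # If the base is an implicit multiplication, the exponent should apply to the
    # group only, then be multiplied by the coefficient.
    if isinstance(base, ImplicitMul):
        return base.coefficient * (base.group ** exponent)   # 3 * 4**2 = 48
    return base ** exponent

assert eval_pow(ImplicitMul(3, 4), 2) == 48
assert eval_pow(12, 2) == 144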
2025-04-01T04:10:14.476205
2022-07-05T12:48:45
1294256451
{ "authors": [ "jonasjucker" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13674", "repo": "C2SM/spack-c2sm", "url": "https://github.com/C2SM/spack-c2sm/pull/491" }
gharchive/pull-request
[devbuildcosmo] Remove broken mechanism to serialize data in devbuildcosmo
The custom command devbuildcosmo always serializes data if +serialize is set. This is counterintuitive, since for installcosmo there is no such mechanism in place. Additionally, it missed the --spec argument, so it was broken anyway and not used by anyone. We manually call the script in our Jenkins plan instead:
spack load $SPACK_SPEC
./cosmo/ACC/test/tools/serialize_cosmo.py -s "$SPACK_SPEC" -b "."
launch jenkins --upstream cosmo
2025-04-01T04:10:14.520349
2024-01-03T12:06:02
2063906558
{ "authors": [ "dominichofer", "jenkins-apn", "jonasjucker" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13675", "repo": "C2SM/spack-c2sm", "url": "https://github.com/C2SM/spack-c2sm/pull/896" }
gharchive/pull-request
[ICON] add variant extra-configure-args Multi-value variant to inject any configure-argument not yet available as variant. Ensure that format is either "--enable-arg" or "--disable-arg". Solves https://github.com/C2SM/spack-c2sm/issues/890 launch jenkins icon tsa :green_circle: unit testTest:green_circle:summary :red_circle: integration testTest:green_circle:icon-spack_info :green_circle:icon-spack_spec :red_circle:icon_extra_configure_args=--disable-new_feature,--enable-old_config_arg-spack_spec :green_circle:dace_icon.-O1-spack_spec :green_circle:icon_serialization=create_claw=std-spack_spec WARNING: Serial tests did not run for system tests balfrin :green_circle: unit testTest:green_circle:summary :red_circle: integration testTest:green_circle:icon-spack_info :green_circle:icon-spack_spec :red_circle:icon_extra_configure_args=--disable-new_feature,--enable-old_config_arg-spack_spec :green_circle:dace_icon.-O1-spack_spec :green_circle:icon_serialization=create_claw=std-spack_spec WARNING: Serial tests did not run for system tests daint :green_circle: unit testTest:green_circle:summary :red_circle: integration testTest:green_circle:icon-spack_info :green_circle:icon-spack_spec :red_circle:icon_extra_configure_args=--disable-new_feature,--enable-old_config_arg-spack_spec :green_circle:dace_icon.-O1-spack_spec :green_circle:icon_serialization=create_claw=std-spack_spec WARNING: Serial tests did not run for system tests launch jenkins icon tsa :green_circle: unit testTest:green_circle:summary :red_circle: integration testTest:green_circle:icon-spack_info :green_circle:icon-spack_spec :red_circle:icon_extra_config_args=--disable-new_feature,--enable-old_config_arg-spack_spec :green_circle:dace_icon.-O1-spack_spec :green_circle:icon_serialization=create_claw=std-spack_spec WARNING: Serial tests did not run for system tests balfrin :green_circle: unit testTest:green_circle:summary :red_circle: integration testTest:green_circle:icon-spack_info :green_circle:icon-spack_spec :red_circle:icon_extra_config_args=--disable-new_feature,--enable-old_config_arg-spack_spec :green_circle:dace_icon.-O1-spack_spec :green_circle:icon_serialization=create_claw=std-spack_spec WARNING: Serial tests did not run for system tests daint :green_circle: unit testTest:green_circle:summary :red_circle: integration testTest:green_circle:icon-spack_info :green_circle:icon-spack_spec :red_circle:icon_extra_config_args=--disable-new_feature,--enable-old_config_arg-spack_spec :green_circle:dace_icon.-O1-spack_spec :green_circle:icon_serialization=create_claw=std-spack_spec WARNING: Serial tests did not run for system tests launch jenkins icon tsa :green_circle: unit testTest:green_circle:summary :green_circle: integration testTest:green_circle:icon-spack_info :green_circle:icon-spack_spec :green_circle:icon_extra-config-args=--disable-new_feature,--enable-old_config_arg-spack_spec :green_circle:dace_icon.-O1-spack_spec :green_circle:icon_serialization=create_claw=std-spack_spec In my opinion this new variant has several design problems that need to be addressed. It's easy to get wrong: icon extra_config_args=--enable-new_faeture will be accepted without a problem, but the desired effect won't occur due to a spelling mistake. icon extra_config_args=--enable-aes and icon +aes will cause an equivalent configuration of icon, but not in the eyes of spack. Because the mechanics of spack are circumnavigated. 
This variant allows icon extra_config_args=--enable-cdi-pio, which would not add the necessary libs via libs += self.spec['libcdi-pio:fortran'].libs. icon extra_config_args=--enable-emvorado,--disable-mpi is permitted, even though icon +emvorado ~mpi is a conflict. The variant aes becomes redundant, as it is covered by extra_config_args=--enable-aes already.
Good points. Problems 2-5 could be prevented by only allowing values of extra-config-args that are not variant values, i.e. extra-config-args=--enable-aes would raise an error because it is already a variant of the package. I can implement a checker for that. I think we can do only little about 1; spelling mistakes should be discovered by the users themselves. Also, I think we have to loosen things a bit here in order not to release a new tag of spack-c2sm for each "pure" configure arg. With "pure" I refer to options that do not need any additional dependencies etc. Something like:
class Icon(AutotoolsPackage, CudaPackage):
    # ... (existing code)
    variant('extra_configure_arg', default=[], description='Additional configure arguments')

    def validate_extra_configure_arg(self):
        for extra_arg in self.spec.variants['extra_configure_arg'].value:
            if extra_arg.startswith(('--enable-', '--disable-')) and \
                    extra_arg[len('--enable-'):].split('=')[0] in self.variants:
                raise error.SpecError(f'The value "{extra_arg}" for the extra_configure_arg variant conflicts '
                                      f'with an existing variant. Choose a different value.')

    def configure_args(self):
        # Validate extra_configure_arg
        self.validate_extra_configure_arg()
This doesn't fix the redundancy problem of aes. The variant can still be deleted and icon would build fine with extra_configure_args=--enable-aes. But if a tight coupling is your concern, these variants would make it even looser: icon extra_libs=libcdi-pio:fortran extra_dep=infero extra_compiler_flags=-O1,-g,--address-sanitizer omit_compiler_flags=-O3. Of course this is a reductio ad absurdum, and the point I'm trying to make is that this PR will only silence the problem for a while. Soon we're going to see the need for extra_compiler_flags and omit_compiler_flags because of debug flags and address- and memory-sanitizers, and just like that we're going to end up with lots of complexity with string parsers and regexes. And spack wouldn't even be able to recognize these variants correctly, since they are mushed strings.
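As a side note, the configure_args body in the snippet above is cut off; a minimal stand-alone sketch of what forwarding the validated values to ./configure could look like is below. It is an illustration only, not the code that was merged, and the handling of the 'none' placeholder is an assumption about how spack represents an empty multi-valued variant.

def forward_extra_configure_args(extra_args):
    # Drop empty entries and spack's 'none' placeholder for "no value set",
    # then pass everything else straight through to ./configure.
    return [arg for arg in extra_args if arg and arg != 'none']

assert forward_extra_configure_args(('none',)) == []
assert forward_extra_configure_args(('--disable-new_feature', '--enable-old_config_arg')) == [
    '--disable-new_feature', '--enable-old_config_arg']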
daint :green_circle: unit testTest:green_circle:summary :green_circle: integration testTest:green_circle:icon-spack_info :green_circle:icon-spack_spec :green_circle:icon_extra-config-args=--disable-new_feature,--enable-old_config_arg-spack_spec :green_circle:dace_icon.-O1-spack_spec :green_circle:icon_serialization=create_claw=std-spack_spec :green_circle: system testTest:green_circle:Icon-install_2_6_6_gcc :green_circle:daint_cpu_cce :green_circle:daint_cpu_gcc :green_circle:daint_cpu_nvhpc :green_circle:daint_cpu_nvhpc_out_of_source :green_circle:daint_gpu_nvhpc :green_circle:daint_dsl_nvhpc
launch jenkins icon
tsa :green_circle: unit testTest:green_circle:summary :green_circle: integration testTest:green_circle:icon-spack_info :green_circle:icon-spack_spec :green_circle:icon_extra-config-args=--disable-new_feature,--enable-old_config_arg-spack_spec :green_circle:dace_icon.-O1-spack_spec :green_circle:icon_serialization=create_claw=std-spack_spec
balfrin :green_circle: unit testTest:green_circle:summary :green_circle: integration testTest:green_circle:icon-spack_info :green_circle:icon-spack_spec :green_circle:icon_extra-config-args=--disable-new_feature,--enable-old_config_arg-spack_spec :green_circle:dace_icon.-O1-spack_spec :green_circle:icon_serialization=create_claw=std-spack_spec :green_circle: system testTest:green_circle:Icon-install_2_6_6_gcc :green_circle:Icon-install_2_6_6_nvhpc :green_circle:Icon-install_nwp_gpu
@dominichofer I see your point, but I think the scenario you sketch is unrealistic. spack already provides options to inject compiler flags through fcflags or cflags in the spec. It is a clear need from BuildBot to have a less tight coupling to spack-c2sm, i.e. allowing the devs to have more freedom with the build. The idea of this variant is that we gain more time until we have to release a new version of spack-c2sm, which would then translate all the extra_config_args into new variants. We discussed this before X-Mas in the spack meeting and it will make our lives easier. I have now implemented an additional check and warnings that should make users aware of the potential damage this variant can have.
@jonasjucker Thanks for the reminder, I forgot that we already discussed this. I have no further concerns. Please ping me again when you're done editing the branch and the tests are green. I will review.
daint :green_circle: unit testTest:green_circle:summary :green_circle: integration testTest:green_circle:icon-spack_info :green_circle:icon-spack_spec :green_circle:icon_extra-config-args=--disable-new_feature,--enable-old_config_arg-spack_spec :green_circle:dace_icon.-O1-spack_spec :green_circle:icon_serialization=create_claw=std-spack_spec :green_circle: system testTest:green_circle:Icon-install_2_6_6_gcc :green_circle:daint_cpu_cce :green_circle:daint_cpu_gcc :green_circle:daint_cpu_nvhpc :green_circle:daint_cpu_nvhpc_out_of_source :green_circle:daint_gpu_nvhpc :green_circle:daint_dsl_nvhpc launch jenkins icon balfrin tsa :green_circle: unit testTest:green_circle:summary balfrin :green_circle: unit testTest:green_circle:summary :green_circle: integration testTest:green_circle:icon-spack_info :green_circle:icon-spack_spec :green_circle:icon_extra-config-args=--disable-new_feature,--enable-old_config_arg-spack_spec :green_circle:dace_icon.-O1-spack_spec :green_circle:icon_serialization=create_claw=std-spack_spec :green_circle: system testTest:green_circle:Icon-install_2_6_6_gcc :green_circle:Icon-install_2_6_6_nvhpc :green_circle:Icon-install_nwp_gpu @dominichofer Now the PR is ready for another review. launch jenkins icon tsa balfrin :green_circle: unit testTest:green_circle:summary tsa :green_circle: unit testTest:green_circle:summary :green_circle: integration testTest:green_circle:icon-spack_info :green_circle:icon-spack_spec :green_circle:icon_extra-config-args=--disable-new_feature,--enable-old_config_arg-spack_spec :green_circle:dace_icon.-O1-spack_spec :green_circle:icon_serialization=create_claw=std-spack_spec
2025-04-01T04:10:14.527500
2017-06-19T20:53:16
237020744
{ "authors": [ "C4Framework", "aleph7", "noisyneuron" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13676", "repo": "C4Labs/C4iOS", "url": "https://github.com/C4Labs/C4iOS/issues/687" }
gharchive/issue
Package installer issue After installing the package, I don't see the project in the xcode window, nor can I seem to find it anywhere.. os x v10.10.5 xcode v7.2.1 What is the default install location? Any suggestions? Hey @noisyneuron I just confirmed on my setup that it works: OSX 10.12.6 Xcode 8.8.3 The last update for the installer involved new code that works only for Swift 3 / Xcode 8+. If you can't update to Sierra / Xc8, you can try: http://www.c4ios.com/C4Installer_1_1_0.pkg This is an older version of the installer that should work on your system. Still no luck... can't find the project anywhere.. If you want to join our slack channel we can chat directly about it, and hopefully I can help you get it running. https://join-c4.herokuapp.com/ Over a year old, closing.
2025-04-01T04:10:14.531305
2022-12-16T23:31:30
1501004506
{ "authors": [ "cuisimon", "jmmAVGO" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13677", "repo": "CAAPIM/apim-charts", "url": "https://github.com/CAAPIM/apim-charts/pull/179" }
gharchive/pull-request
Parametrize enabling restman in portal ingress
Please note that this is for internal use only, basically to be able to run RAT tests on Portal. However, it will expose the functionality to portal customers if we merge the PR to develop/portal, then stable. Thus, portal PMs/POs/Devs will need to decide whether or not to include the function in the portal Helm chart.
@cuisimon Tweak needed: apim-deployment.yaml specifies restman.enabled, but values.yaml specifies restman.enable. It sounded to me from our discussions this week that this would still be a valuable change for us internally. Is this type of Restman installation something that has no value to customers? I don't know the product well enough to answer that question. @Gazza7205 @SatishKoney-BRCM @melil02 - if this approach is fundamentally okay, can you let us know what adjustments are needed? If a new approach is needed, can you help us determine a path forward? If this is something we don't want to deal with in this repo, let's figure that out so this PR can be closed.
2025-04-01T04:10:14.533799
2024-11-24T09:21:09
2687415673
{ "authors": [ "Gorgeous-Patrick", "developStorm" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13678", "repo": "CAENTainer/GCC-Images", "url": "https://github.com/CAENTainer/GCC-Images/issues/2" }
gharchive/issue
ARM support
EECS 281 is thinking about releasing this container image to students. One major roadblock is that it does not support an ARM64 version (macOS). Reading the commit history, I figured out that it was removed intentionally. Was there a particular reason? This has been released to students at least in F22: https://eecs281staff.github.io/eecs281setup/guides/unified.
Building ARM-native images on x86 machines requires QEMU (or attempting some form of cross compilation, which was quite complex when I tried). Currently, building this image natively for x86 already takes hours, and building it under QEMU simply takes too much time to be practical. However, it would be feasible now if we had an ARM build machine. Then we could separately build the image for both platforms natively and publish umbrella metadata (a multi-arch manifest) that includes both images. This, however, doesn't mean that students can't use this image on ARM: Docker automatically runs x86 images under QEMU emulation. Students need considerably fewer computational resources than compiling GCC and other tools during the build process, so trading some runtime speed is a more workable compromise, although the slowdown can be significant.
OK I see you are right.
2025-04-01T04:10:14.555359
2024-06-06T08:51:37
2337712682
{ "authors": [ "buildmachine-sou-jenkins2", "michael-bryson" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13679", "repo": "CAFapi/opensuse-tomcat-jre17-image", "url": "https://github.com/CAFapi/opensuse-tomcat-jre17-image/pull/55" }
gharchive/pull-request
US917104: Update to latest base image Ticket: https://internal.almoctane.com/ui/entity-navigation?p=131002/6001&entityType=work_item&id=917104 A developer build has not yet been created for this branch. Click here to go ahead and create the build... CI Build Link: https://sou-jenkins2.swinfra.net/job/CAFapi/job/CAFapi~opensuse-tomcat-jre17-image~US917104~CI
2025-04-01T04:10:14.592977
2021-09-27T17:14:53
1008396969
{ "authors": [ "iskobleva", "sarthakpati" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13682", "repo": "CBICA/GaNDLF", "url": "https://github.com/CBICA/GaNDLF/pull/216" }
gharchive/pull-request
Fixing error in hausdorff calculation
Fixes #215
Proposed Changes
- added a line (140 in GaNDLF/metrics/segmentation) to iterate over multiple arrays associated with samples in a batch
- made sure the subject spacing is estimated from the batch (line 141)
- adjusted squeeze operation
- note: didn't squash commits
Checklist
[x] I have read the CONTRIBUTING guide
[x] My PR is based from the current GaNDLF master
[x] Non-breaking change (would not break existing functionality): if changes breaks current code, please provide as many details as possible
[x] Function/class source code documentation added/updated
[x] Code has been blacked for style consistency
[x] If applicable, version information has been updated in GANDLF/version.py
[x] If adding a submodule, add to list of exceptions for black styling in pyproject.toml file
[x] Usage documentation has been updated, if appropriate
[x] History has been updated, if appropriate
[x] Tests added or modified to cover the changes; if coverage is reduced, please give explanation
This is addressed in a better way with #217, right, @iskobleva?
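The batch-iteration change described in the Proposed Changes above can be illustrated with a small stand-alone sketch: compute the distance per sample along the batch dimension rather than on the whole stacked array. This is not GaNDLF's actual implementation; scipy's directed_hausdorff is used here only as a stand-in for whatever Hausdorff backend the package uses, and the array shapes are assumptions.

import numpy as np
from scipy.spatial.distance import directed_hausdorff

def batch_hausdorff(pred: np.ndarray, target: np.ndarray) -> list[float]:
    """pred/target: (batch, N, 3) point sets, one set of points per sample."""
    distances = []
    for p, t in zip(pred, target):  # iterate over the batch dimension, one sample at a time
        d = max(directed_hausdorff(p, t)[0], directed_hausdorff(t, p)[0])
        distances.append(d)
    return distances

# Example: two samples of 100 random 3-D points each.
rng = np.random.default_rng(0)
print(batch_hausdorff(rng.random((2, 100, 3)), rng.random((2, 100, 3))))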
2025-04-01T04:10:14.599309
2015-04-22T17:15:27
70183559
{ "authors": [ "CBeTHaX", "kerts93", "t3hk0d3" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13683", "repo": "CBeTHaX/Skylines-Traffic-Manager", "url": "https://github.com/CBeTHaX/Skylines-Traffic-Manager/issues/4" }
gharchive/issue
3-lane roads outer lane not used
On 3-lane roads, traffic usually refuses to use the outermost lane (same thing I see with highways and 6-lane roads). Version 1.01rc
+1
This is irritating and annoying.
Check in 1.04rc; it should be better. Just make sure to wait a little so the new pathfinding takes place (or clear traffic for a faster effect).
Looks better indeed
2025-04-01T04:10:14.600270
2023-02-06T14:24:34
1572664589
{ "authors": [ "jgb1128", "mbdowne" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13684", "repo": "CC-in-the-Cloud/General", "url": "https://github.com/CC-in-the-Cloud/General/issues/59" }
gharchive/issue
References Is [nist_cloud] reference "NIST SP 800-145: The NIST Definition of Cloud Computing"? We are using this reference and this can be closed.
2025-04-01T04:10:14.603216
2023-12-06T19:13:12
2029221687
{ "authors": [ "kelly-sovacool" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13685", "repo": "CCBR/RENEE", "url": "https://github.com/CCBR/RENEE/issues/52" }
gharchive/issue
singularity pull rate limit Error run into by Ayslan on biowulf the first time he tried to run RENEE: Failed to pull singularity image from docker://nciccbr/ccbr_rseqc_4.0.0:v1.0: FATAL: While making image from oci registry: error fetching image to cache: failed to get checksum for docker://nciccbr/ccbr_rseqc_4.0.0:v1.0: reading manifest v1.0 in docker.io/nciccbr/ccbr_rseqc_4.0.0: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Implementing the shared SIF cache dir by default should help so users don't have to pull containers already pulled by others. Ultimately the problem was the user was calling renee run once per fastq file instead of passing all fastq files at once. I could see how users may hit the dockerhub rate limit though if trying to run renee for multiple projects simultaneously. If other users run into legitimate rate limit issues due to running renee simultaneously on multiple data sets (rather than a user incorrectly running once per sample), we may want to look into how users can authenticate their dockerhub account on biowulf to increase their rate limit and add instructions to our docs. Until then, this is likely not a true issue.
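The "shared SIF cache dir by default" idea mentioned above could be implemented by having the CLI resolve a cache location in a fixed order before falling back to a per-user directory. The sketch below is only an illustration of that resolution logic, not RENEE's actual code: the shared path is a placeholder, while SINGULARITY_CACHEDIR is Singularity's standard cache environment variable.

import os
from pathlib import Path

SHARED_SIF_CACHE = Path("/data/shared/SIFs")   # hypothetical shared cache location

def resolve_sif_cache(cli_value: str | None = None) -> Path:
    if cli_value:                                        # an explicit CLI flag wins
        return Path(cli_value)
    if os.environ.get("SINGULARITY_CACHEDIR"):           # respect the user's env setting
        return Path(os.environ["SINGULARITY_CACHEDIR"])
    if SHARED_SIF_CACHE.is_dir():                        # fall back to the shared cache
        return SHARED_SIF_CACHE
    return Path.home() / ".singularity"                  # last resort: per-user cache

print(resolve_sif_cache())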
2025-04-01T04:10:14.610782
2018-05-21T18:02:12
325006409
{ "authors": [ "naved001", "zenhack" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13686", "repo": "CCI-MOC/hil", "url": "https://github.com/CCI-MOC/hil/issues/1017" }
gharchive/issue
Make CLI output pretty
Right now we just dump whatever JSON we receive, which isn't very readable (especially for calls like show_node with multiple networks). For some of the output we could use PrettyTable, but I am fine with any other solution that makes it neater than what we currently have.
I'm definitely on board with making this look nicer. I'm also not picky about the implementation details.
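A rough sketch of what rendering show_node-style JSON with PrettyTable could look like is below. The JSON shape is invented for illustration and is not HIL's actual response schema.

import json
from prettytable import PrettyTable

raw = ('{"name": "node1", "project": "proj1", "nics": ['
       '{"label": "eth0", "macaddr": "aa:bb:cc:dd:ee:01", "networks": {"vlan/native": "net-a"}},'
       '{"label": "eth1", "macaddr": "aa:bb:cc:dd:ee:02", "networks": {}}]}')
node = json.loads(raw)

table = PrettyTable()
table.field_names = ["NIC", "MAC address", "Networks"]
for nic in node["nics"]:
    networks = ", ".join(f"{ch}: {net}" for ch, net in nic["networks"].items()) or "-"
    table.add_row([nic["label"], nic["macaddr"], networks])

print(f"Node: {node['name']} (project: {node['project']})")
print(table)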
2025-04-01T04:10:14.618875
2016-08-11T19:07:49
170721918
{ "authors": [ "SahilTikale", "gsilvis", "shwsun", "zenhack" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13687", "repo": "CCI-MOC/hil", "url": "https://github.com/CCI-MOC/hil/pull/635" }
gharchive/pull-request
fixes the show_node issue and updates the documentation
show_node and list_nodes erroneously shared the same URI. This caused show_node to list all nodes. This PR fixes it. Updated the documentation rest_api.md accordingly.
You forgot to update the tests. Other than that this looks good. Something I noticed (which isn't part of this pr, since it was already there): we're not actually checking the value of the parameter in the server-side api call; if someone does GET /nodes/banana they'll get the same thing as GET /nodes/all. We probably want to be more strict than this. I think with the schema library you could actually do something along the lines of:
Schema({"is_free": Or("free", "all")})
@zenhack That is probably because api.py does not check all at all.
def list_nodes(is_free):
    """List all nodes or all free nodes

    Returns a JSON array of strings representing a list of nodes.
    Example: '["node1", "node2", "node3"]'
    """
    if is_free == "free":
        nodes = model.Node.query.filter_by(project_id=None).all()
    else:
        nodes = model.Node.query.all()
@SahilTikale since this pr is working against list_nodes can you fix this as well? And change the doc if necessary. I agree with @zenhack about the keywords 'all' and 'free'.
I attempted to change it, but later postponed it, thinking it needs its own PR. Will do that once this is merged. I checked that the unit tests were passing on my system; this must be CI or stress.py. Will check and fix that. @shwsun will fix what you suggested.
@SahilTikale I think I just tracked down what's wrong here. Github will compare your PR against the newest HIL main repo, and your repo is 11 commits behind the upstream. Fix it and then it will be ready to ship.
@shwsun, the problem is actually just that he needs to update the tests to match the code; they currently expect list_free_nodes to be at the old path (look at the error log). Pulling in master is good policy anyway. And yes, it's stress.py that's failing.
@zenhack @shwsun Fixed the errors, this is ready for merge.
Meta: where did this "Ready to merge" label come from? If it were ready to merge, a reviewer would have merged it. "Waiting on reviewer" is probably the appropriate label, no? No one has signed off on this, so tagging it "Ready to merge" is presumptuous at best.
+1 once it passes pep8. (And, I have no clue where that tag came from.) @SahilTikale, fix the indentation issue @gsilvis pointed out and then I'm happy.
Indentation is fixed.
+1
+1, merging. Also, any objections if I just remove that label from the list of available labels? It doesn't seem to have a legitimate use case, and nothing is currently even using it -- we should just be using waiting on * instead.
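The stricter parameter check discussed above, using the schema library's Schema/Or, could look roughly like the sketch below. The surrounding view function is invented for illustration and is not HIL's actual server code; only the Schema(Or(...)) idea comes from the conversation.

from schema import Schema, Or, SchemaError

_LIST_NODES_SCHEMA = Schema(Or("free", "all"))

def list_nodes(is_free: str):
    try:
        is_free = _LIST_NODES_SCHEMA.validate(is_free)
    except SchemaError:
        # Reject anything other than the two documented keywords.
        raise ValueError(f"expected 'free' or 'all', got {is_free!r}")
    # ... query for free nodes or all nodes as in the snippet above ...
    return is_free

print(list_nodes("free"))   # ok
# list_nodes("banana")      # would raise ValueError instead of silently listing all nodes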
2025-04-01T04:10:14.662232
2024-07-12T19:00:50
2406175241
{ "authors": [ "chris-kuryak", "victor-chaparro" ], "license": "CC0-1.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13688", "repo": "CDCgov/prime-reportstream", "url": "https://github.com/CDCgov/prime-reportstream/issues/15168" }
gharchive/issue
Create SRD for Test Message Bank Problem statement To scope the actual work needed to create the Test Message Bank feature, we need to create a Software Requirements Document (SRD). What you need to know Existing SRD Product Brief Requirements for Test Message Bank Discovery Mural with some more information If you need a reference for what an SRD looks like and how to document it, ask Victor or refer to this one from the Platform team Acceptance criteria [x] Document created that contains: [x] Technical requirements [x] Proposed solution [x] Highlight potential technical limitations and hurdles [x] Link document in Product Brief Links section [x] Align with Engagement Tech Lead [x] Align with Engagement engineering team [x] Sync with Product Manager [x] Align/get feedback with Platform Tech Lead Aligned with Arnej during 1x1.
2025-04-01T04:10:14.664232
2024-08-22T00:21:56
2479461609
{ "authors": [ "chris-kuryak", "etanb" ], "license": "CC0-1.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13689", "repo": "CDCgov/prime-reportstream", "url": "https://github.com/CDCgov/prime-reportstream/issues/15666" }
gharchive/issue
Convert site.json to a JS Module Right now, we keep all of the link data for the site and our tests in a giant json blob here: https://github.com/CDCgov/prime-reportstream/blob/master/frontend-react/src/content/site.json We should modernize and convert this blob into a JS Module so we can export it properly. For example, we have to import this json in an "experimental" way in our e2e tests, which I've shared a screenshot of. We wouldn't get this warning with a proper JS Module import. Team not aligned on solution to the problem. Needs more discussion/direction from a technical lead perspective.
2025-04-01T04:10:14.680189
2021-11-29T14:51:51
1066132668
{ "authors": [ "ahay-agile6", "anshulkumar-usds", "hermanAlexCordero", "loripusey" ], "license": "CC0-1.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13690", "repo": "CDCgov/prime-reportstream", "url": "https://github.com/CDCgov/prime-reportstream/issues/3209" }
gharchive/issue
Create Metabase Queries to Measure ReportStream Usage​ Problem Statement As the Experience Team, we need to capture metrics for how senders and receivers are using ReportStream so that we can gain insights into usage and user retention and make recommendations for improving ReportStream. Queries we would like Senders & Receivers List of active receivers (done, needs QA) % receivers that are active List of active senders (done, needs QA) % senders that are active Number of active senders (done, needs QA) Number of active receivers by jurisdiction (done, needs QA) Number of facilities that have sent test results from by state Number of active senders by state & county (done, needs QA) Reports Received & Sent Number of reports submitted by active senders by sender & week (using existing query, needs QA) Number of reports sent to active PHDs by PHD & week (using existing query, needs QA) Test Results Received & Sent Number test results submitted by active senders & week (using existing query, needs QA) % active senders sending data by week Number test results received by active PHDs by PHD & week (using existing query, needs QA) Number test results received by active PHDs by PHD & week & feed % active receivers getting data by week @hermanAlexCordero @Adrian-Brewster in what sprint were you planning to look at these and do you have a different ticket off of which you are working? @hermanAlexCordero @Adrian-Brewster Not sure if this is still on your radar, but I have updated & added some new MB queries to the Experience Team Dashboard (see comments in the AC above) cc: @rachelhanster @loripusey I will add the new queries to the set. The first set of queries were being reviewed with Mike before I went on vacation. I'll update the ticket when I'm done with the new queries @loripusey - @hermanAlexCordero is no longer with O&O so I'm removing him from the ticket and the O&O label for now. 
Some of the queries have been written and now exist on the Experience Team dashboard (mostly the numbers ones) - these probably need to be reviewed for query correctness but also need to be reviewed because some of the fields where we get data are not being consistently used across senders & receivers None of the percentage queries have been created Queries we would like Senders & Receivers List of active receivers (done, needs QA) https://prime.cdc.gov/metabase/question/153-live-stlt-orgs % receivers that are active https://prime.cdc.gov/metabase/question/194-percentage-of-active-receivers-over-total-receivers List of active senders (done, needs QA) https://prime.cdc.gov/metabase/question/154-live-reporting-orgs % senders that are active https://prime.cdc.gov/metabase/question/195-percentage-of-active-senders-over-total-senders Number of active senders (done, needs QA) https://prime.cdc.gov/metabase/question/113-live-organizations Number of active receivers by jurisdiction (done, needs QA) https://prime.cdc.gov/metabase/question/113-live-organizations Reports Received & Sent Number of reports submitted by active senders by sender & week (using existing query, needs QA) Number of reports sent to active PHDs by PHD & week (using existing query, needs QA) https://prime.cdc.gov/metabase/question/21-test-reports-sent-weekly Test Results Received & Sent Number test results submitted by active senders & week (using existing query, needs QA) https://prime.cdc.gov/metabase/question/71-covid-test-results-sent-to-reportstream-per-submitter-per-week % active senders sending data by week https://prime.cdc.gov/metabase/question/196-active-senders-sending-data-by-week Number test results received by active PHDs by PHD & week (using existing query, needs QA) Number test results received by active PHDs by PHD & week & feed https://prime.cdc.gov/metabase/question/198-number-test-results-received-by-active-phds-by-phd-week-feed % active receivers getting data by week https://prime.cdc.gov/metabase/question/199-active-receivers-getting-data-by-week
2025-04-01T04:10:14.684896
2023-05-31T19:15:20
1734846737
{ "authors": [ "JohnNKing" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13691", "repo": "CDCgov/trusted-intermediary", "url": "https://github.com/CDCgov/trusted-intermediary/issues/356" }
gharchive/issue
Partner PHL Integration Options Backlog Task For MN: Create a LOE matrix of integration options based on the actual constraints of EHR vendors, hospitals and the PHLs to understand the opportunities we can leverage. This can be expanded and used to assess learnings from the pilot and future partner implementations. Completion Criteria [ ] For MN, create a baseline matrix of all potential integration options and technolog(ies) used. This information will complete/validate our assumptions work completed in story #292 Tasks [ ] Arrange further meetings and pathway for async questions - @JohnNKing [ ] Identify potential integration options for MN [ ] Determine subsequent level of effort; to help ensure a realistic timeline [ ] Document MN outreach questions regarding technical requirements [ ] Send these to Natalie for distribution [ ] Understand existing ETOR offering (if present) and how we might build off of it [ ] Understand PHL expectations and what they're envisioning for an ETOR solution [ ] Map data fields for MN to HL7 and FHIR (see Screening Card Data fields spreadsheet) [ ] Identify gaps in knowledge about how HL7 and/or FHIR messages are constructed, mapped, and converted (e.g. provenance) Other Notes Any other notes to help clarify this task for the team Discussed during Sprint Planning -- moving back to needing refinement; perhaps we can make this more generic or apply to our first partner? Since replaced by other stories
2025-04-01T04:10:14.685917
2023-04-21T22:05:08
1679151553
{ "authors": [ "briri" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13692", "repo": "CDLUC3/dmptool", "url": "https://github.com/CDLUC3/dmptool/pull/464" }
gharchive/pull-request
V4.1.0 beta
@terrywbrady this is just some refactoring to consolidate some of the JS and to bring it closer to the way we've done things elsewhere in the DB.
Wrong branch, will resubmit.
2025-04-01T04:10:14.694744
2023-09-19T16:25:20
1903341124
{ "authors": [ "CDarts48" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13693", "repo": "CDarts48/prework-study-guide", "url": "https://github.com/CDarts48/prework-study-guide/issues/2" }
gharchive/issue
CSS
CSS
As a boot camp student
I want the prework notes to be structured on a webpage
So that I can easily find and read the information
GIVEN a Prework Study Guide website
WHEN I view the study guide
THEN I see a dark blue header and footer, and four boxes with a shadow
Added CSS
2025-04-01T04:10:14.697843
2024-04-05T22:47:55
2228910650
{ "authors": [ "CDrummond", "theone11" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13694", "repo": "CDrummond/lms-material", "url": "https://github.com/CDrummond/lms-material/issues/866" }
gharchive/issue
Why can't I see the "Albums" icon? I am running the latest LMS (v8.5.0 -<PHONE_NUMBER>) and lms-material plugin 4.4.1 and when I enter the "My Music" section I can't see the "Albums" icon to enter the Albums view You have enabled 'Release type support' in LMS. This then splits (what was) "Albums" into "Albums", "EPs", "Singles", etc. With this enabled it makes no sense to refer to a list of "Albums, EPs, etc." as "Albums" - because it contains more than albums. And if, for example, you only had "EPs" then if "Albums" were used as the title here when you browsed Material would state (e.g.) "100 EPs" - again no sense. For this reason Material changes the string to "Release" - but the view when entered is the same. If you want an explicit "Albums" view that only contains albums you can use "Extended Browse Modes" -e.g.: ...which leads to:
2025-04-01T04:10:15.005582
2021-10-26T11:54:00
1036203205
{ "authors": [ "iGovindY", "michalvasko" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13695", "repo": "CESNET/netopeer2", "url": "https://github.com/CESNET/netopeer2/issues/1046" }
gharchive/issue
Non-standard capability support.
Hi All, does netopeer2/sysrepo provide a way to add support for a non-standard capability? If yes, in which version (the libyang1 branch or devel)? Also, how would such functionality be used? All I could find was nc_server_set_capability(), which is only used by netopeer for standard capabilities. If not, is this planned anytime?
Hi Michal, my understanding is that with YANG 1.1 support, features are not advertised in hello, only capabilities are; for features, the client needs to do a get on /modules-state/module after the NETCONF session is established, right? Can netopeer2 advertise feature support in hello as well?
Well, you will never get YANG 1.1 modules (or their features) in hello because it is defined that way, and adding explicit capabilities for them is just trying to bypass the specification. You should rely on ietf-yang-library data instead. I was, obviously, talking about YANG 1.0 modules; they are included in hello with all the enabled features.
2025-04-01T04:10:15.007325
2024-11-01T05:00:46
2628328261
{ "authors": [ "AstaFrode", "httpservlet", "jiuquxzy" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13696", "repo": "CESSProject/cess-miner", "url": "https://github.com/CESSProject/cess-miner/issues/302" }
gharchive/issue
After successful verification, the idle file did not delete the temporary file for some reason. The total size of the idle file after successful verification should be 16G, but in my miner, it is 32G. For some reason, 16G of temporary files were not deleted, which will affect the subsequent P disk space. This issue has been fixed in the new version: https://github.com/CESSProject/cess_pois/tree/v0.5.17 Will be pushed in the next version of cess-miner Fixed on b929463fc805a92df0b6e6e4db189c693c6d1ea6
2025-04-01T04:10:15.083170
2022-11-02T18:30:42
1433601489
{ "authors": [ "kbeaugrand" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13697", "repo": "CGI-FR/IoT-Hub-Portal", "url": "https://github.com/CGI-FR/IoT-Hub-Portal/issues/1504" }
gharchive/issue
Bug: System module URIs are empty in the edge model details
Expected Behavior
The Edge module Image URIs should be filled with the deployment manifest content.
Current Behavior
Fields are empty.
Steps to Reproduce
Go to the Edge model detail.
Context (Environment)
Portal version: 3.4
LoRaWAN Stack version:
Logs
Additional Information
This was an issue in our data
2025-04-01T04:10:15.154012
2021-10-13T21:48:53
1025725673
{ "authors": [ "rudy-patel" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13698", "repo": "CMPUT301F21T12/habitSmasher", "url": "https://github.com/CMPUT301F21T12/habitSmasher/issues/41" }
gharchive/issue
Update habit list UI to match mockup
User statement
As a doer, I would like the habit list to look good.
Describe the solution you'd like
Make the habit list UI match the Figma design, shown below:
Development notes
Add any other context or screenshots about the feature request here.
Completed