| Column | Dtype | Values (min – max, or class count) |
| --- | --- | --- |
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 – 19 |
| repo | stringlengths | 5 – 112 |
| repo_url | stringlengths | 34 – 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 – 757 |
| labels | stringlengths | 4 – 664 |
| body | stringlengths | 3 – 261k |
| index | stringclasses | 10 values |
| text_combine | stringlengths | 96 – 261k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 – 232k |
| binary_label | int64 | 0 – 1 |

The sample rows below follow this schema, one field per line, in the column order listed above.
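The schema describes what appears to be a defect-classification dataset of GitHub `IssuesEvent` records. A minimal sketch of loading and sanity-checking such a dump with pandas; the file name `issues.csv` is an assumption, not something given here:

```python
# A minimal inspection sketch; "issues.csv" is a hypothetical file name.
import pandas as pd

df = pd.read_csv("issues.csv")  # assumed CSV export of the table described above

# Confirm the expected columns are present.
expected = [
    "Unnamed: 0", "id", "type", "created_at", "repo", "repo_url", "action",
    "title", "labels", "body", "index", "text_combine", "label", "text",
    "binary_label",
]
print([c for c in expected if c not in df.columns])  # [] if the schema matches

# Class balance of the two-valued target.
print(df["label"].value_counts())
print(df["binary_label"].value_counts())
```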
81,816
31,713,342,253
IssuesEvent
2023-09-09 14:51:26
FalsehoodMC/Fabrication
https://api.github.com/repos/FalsehoodMC/Fabrication
opened
Crawling still working when gui opening
k: Defect n: Forge s: New
Most control keys stop working when gui opening, for example, if you open your inventory, pressing shift will not make you sneak. But key of crawling(I set it to alt) still working, it means pressing alt can make you crawl even when you open gui.
1.0
Crawling still working when gui opening - Most control keys stop working when gui opening, for example, if you open your inventory, pressing shift will not make you sneak. But key of crawling(I set it to alt) still working, it means pressing alt can make you crawl even when you open gui.
defect
crawling still working when gui opening most control keys stop working when gui opening for example if you open your inventory pressing shift will not make you sneak but key of crawling i set it to alt still working it means pressing alt can make you crawl even when you open gui
1
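In the first sample row above, `text_combine` is simply the `title` and `body` joined with " - ", and the string `label` lines up with `binary_label` ("defect" with 1, "non_defect" with 0 in the rows that follow). A hedged reconstruction of those two derived fields, inferred from the samples rather than from any documented pipeline:

```python
# Inferred from the sample rows shown here, not from any documented pipeline.
def combine(title: str, body: str) -> str:
    """Join title and body the way text_combine appears to be built."""
    return f"{title} - {body}"

def to_binary(label: str) -> int:
    """Map the string label onto the integer target seen in the rows."""
    return 1 if label == "defect" else 0

title = "Crawling still working when gui opening"
body = "Most control keys stop working when gui opening, ..."
assert combine(title, body).startswith(f"{title} - Most control keys")
assert to_binary("defect") == 1 and to_binary("non_defect") == 0
```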
458,083
13,168,388,193
IssuesEvent
2020-08-11 12:02:36
airshipit/airshipctl
https://api.github.com/repos/airshipit/airshipctl
closed
Determine and implement proper file permissions on Airship2 generated files
enhancement priority/low priority/medium ready for review
**Problem description (if applicable)** Airship2 will be responsible for generating a number of files, some of which will contain secrets. The default permissions set when generated via Go are overly permissive. In Airship1 Pegleg acted as the front door generating files, encrypting files, and generating the genesis bundle. A stance was taken of setting 640 permissions on all files Pegleg created, but no path has yet been identified for Airship2. **Proposed change** Identify types of files that should have restricted file permissions, some possibilities: 1. All files generated by Airship2 2. Files containing secrets generated by Airship2 Identify and implement correct file permissions for these files, such as 0640. **Potential impacts** Restricting file permissions may lead to failures with other components. Use of sudo may be needed, or using a common user for these components.
2.0
Determine and implement proper file permissions on Airship2 generated files - **Problem description (if applicable)** Airship2 will be responsible for generating a number of files, some of which will contain secrets. The default permissions set when generated via Go are overly permissive. In Airship1 Pegleg acted as the front door generating files, encrypting files, and generating the genesis bundle. A stance was taken of setting 640 permissions on all files Pegleg created, but no path has yet been identified for Airship2. **Proposed change** Identify types of files that should have restricted file permissions, some possibilities: 1. All files generated by Airship2 2. Files containing secrets generated by Airship2 Identify and implement correct file permissions for these files, such as 0640. **Potential impacts** Restricting file permissions may lead to failures with other components. Use of sudo may be needed, or using a common user for these components.
non_defect
determine and implement proper file permissions on generated files problem description if applicable will be responsible for generating a number of files some of which will contain secrets the default permissions set when generated via go are overly permissive in pegleg acted as the front door generating files encrypting files and generating the genesis bundle a stance was taken of setting permissions on all files pegleg created but no path has yet been identified for proposed change identify types of files that should have restricted file permissions some possibilities all files generated by files containing secrets generated by identify and implement correct file permissions for these files such as potential impacts restricting file permissions may lead to failures with other components use of sudo may be needed or using a common user for these components
0
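The `text` column in these rows reads like `text_combine` lower-cased with punctuation, digits, and markup stripped and whitespace collapsed. A rough approximation of that normalization for English rows; the exact cleaning rules are not given here, so treat this as a guess:

```python
import re

def normalize(text_combine: str) -> str:
    """Approximate the cleaning that seems to produce the text column."""
    lowered = text_combine.lower()
    # Replace runs of anything that is not an ASCII letter with a single space
    # (drops digits, punctuation, and markdown/HTML symbols).
    return re.sub(r"[^a-z]+", " ", lowered).strip()

sample = "Search form not translated in some cases - **[Submitted ...]**"
print(normalize(sample))  # search form not translated in some cases submitted
```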
276,575
30,488,109,006
IssuesEvent
2023-07-18 05:10:04
oxidecomputer/omicron
https://api.github.com/repos/oxidecomputer/omicron
opened
Add warning to wicket UI if trust quorum is not enabled
security trust quorum wicket
If the user configures a rack with one or two sleds, trust quorum will not be enabled because it is not supported. While technically we could support it with two sleds, this leaves zero fault tolerance and makes little sense. In any case we aren't planning on selling racks to customers with fewer than 16 sleds, and even if they have a handful of defective sleds out of the gate they will still be initializing the rack cluster with more than 2 sleds. So this warning is really to developers and users of evaluation clusters. Note that when trust quorum is disabled, hardcoded input key material is used for key derivation. So this is a security issue, just not one we should ever see in our production configuration.
True
Add warning to wicket UI if trust quorum is not enabled - If the user configures a rack with one or two sleds, trust quorum will not be enabled because it is not supported. While technically we could support it with two sleds, this leaves zero fault tolerance and makes little sense. In any case we aren't planning on selling racks to customers with fewer than 16 sleds, and even if they have a handful of defective sleds out of the gate they will still be initializing the rack cluster with more than 2 sleds. So this warning is really to developers and users of evaluation clusters. Note that when trust quorum is disabled, hardcoded input key material is used for key derivation. So this is a security issue, just not one we should ever see in our production configuration.
non_defect
add warning to wicket ui if trust quorum is not enabled if the user configures a rack with one or two sleds trust quorum will not be enabled because it is not supported while technically we could support it with two sleds this leaves zero fault tolerance and makes little sense in any case we aren t planning on selling racks to customers with fewer than sleds and even if they have a handful of defective sleds out of the gate they will still be initializing the rack cluster with more than sleds so this warning is really to developers and users of evaluation clusters note that when trust quorum is disabled hardcoded input key material is used for key derivation so this is a security issue just not one we should ever see in our production configuration
0
297,049
9,159,959,681
IssuesEvent
2019-03-01 05:12:45
puneet-tm/wirelessone-support
https://api.github.com/repos/puneet-tm/wirelessone-support
closed
Notified by user: RADWIN JET HBS UTILISATION NOT GETTING CAPTURED ON W1
High Priority Resolved bug
Radwin5K-JET utilization is not getting captured on W1 JET HBS: 10.194.120.3
1.0
Notified by user: RADWIN JET HBS UTILISATION NOT GETTING CAPTURED ON W1 - Radwin5K-JET utilization is not getting captured on W1 JET HBS: 10.194.120.3
non_defect
notified by user radwin jet hbs utilisation not getting captured on jet utilization is not getting captured on jet hbs
0
72,983
24,393,709,768
IssuesEvent
2022-10-04 17:17:24
decentraland/unity-renderer
https://api.github.com/repos/decentraland/unity-renderer
opened
PlayerName Update Improvement
defect good first issue
![image.png](https://images.zenhubusercontent.com/61080c59fd1a374ce3f3b027/900ba265-4b48-46be-9b37-55e9dcfe8a5a) When we have 100 avatars, the PlayerName is updating its alpha every frame even if the alpha didn't change. - [ ] We can gain a slight performance boost if we ignore all updates if the alpha is rounded up for each 0.1 and if the previous alpha is the same as the one we have now. - [ ] We can also remove the LinQ since its allocating memory.
1.0
PlayerName Update Improvement - ![image.png](https://images.zenhubusercontent.com/61080c59fd1a374ce3f3b027/900ba265-4b48-46be-9b37-55e9dcfe8a5a) When we have 100 avatars, the PlayerName is updating its alpha every frame even if the alpha didn't change. - [ ] We can gain a slight performance boost if we ignore all updates if the alpha is rounded up for each 0.1 and if the previous alpha is the same as the one we have now. - [ ] We can also remove the LinQ since its allocating memory.
defect
playername update improvement when we have avatars the playername is updating its alpha every frame even if the alpha didn t change we can gain a slight performance boost if we ignore all updates if the alpha is rounded up for each and if the previous alpha is the same as the one we have now we can also remove the linq since its allocating memory
1
55,593
14,579,083,357
IssuesEvent
2020-12-18 06:32:12
scipy/scipy
https://api.github.com/repos/scipy/scipy
closed
GitHub Actions test failures for MacOS
CI defect
For a while now the tests run via GitHub Actions on MacOS have been failing to build SciPy. The error in the log is error: library mach has Fortran sources but no Fortran compiler found It turns out there is a bug in `.github/workflows/macos.yml` that results in`gfortran` not being in the search path. I fixed that in gh-13256, and in that PR, the build of SciPy completes, but then a segmentation fault occurs when the tests are run.
1.0
GitHub Actions test failures for MacOS - For a while now the tests run via GitHub Actions on MacOS have been failing to build SciPy. The error in the log is error: library mach has Fortran sources but no Fortran compiler found It turns out there is a bug in `.github/workflows/macos.yml` that results in`gfortran` not being in the search path. I fixed that in gh-13256, and in that PR, the build of SciPy completes, but then a segmentation fault occurs when the tests are run.
defect
github actions test failures for macos for a while now the tests run via github actions on macos have been failing to build scipy the error in the log is error library mach has fortran sources but no fortran compiler found it turns out there is a bug in github workflows macos yml that results in gfortran not being in the search path i fixed that in gh and in that pr the build of scipy completes but then a segmentation fault occurs when the tests are run
1
60,404
17,023,416,739
IssuesEvent
2021-07-03 01:55:13
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
closed
Search form not translated in some cases
Component: website Priority: minor Resolution: fixed Type: defect
**[Submitted to the original trac issue database at 5.22am, Wednesday, 3rd June 2009]** When going from map view tab to export tab the search form is translated ok on both. When going from history tab to export tab the search form shows untranslated. Same happens with Map key in main menu (the rest of the menu is translated nicely) and "The Free Wiki World Map" under the logo.
1.0
Search form not translated in some cases - **[Submitted to the original trac issue database at 5.22am, Wednesday, 3rd June 2009]** When going from map view tab to export tab the search form is translated ok on both. When going from history tab to export tab the search form shows untranslated. Same happens with Map key in main menu (the rest of the menu is translated nicely) and "The Free Wiki World Map" under the logo.
defect
search form not translated in some cases when going from map view tab to export tab the search form is translated ok on both when going from history tab to export tab the search form shows untranslated same happens with map key in main menu the rest of the menu is translated nicely and the free wiki world map under the logo
1
34,946
14,546,677,033
IssuesEvent
2020-12-15 21:37:41
devssa/onde-codar-em-salvador
https://api.github.com/repos/devssa/onde-codar-em-salvador
opened
[TECH MANAGER] [REMOTO] [TAMBEM PCD] Tech Manager | Social na [PICPAY]
API GATEWAY APIs CONTAINER GESTAO DE PESSOAS GESTÃO DE PROJETOS HELP WANTED METODOLOGIAS ÁGEIS MICROSERVICES MOBILE REMOTO TECH MANAGER VAGA PARA PCD TAMBÉM
<!-- ================================================== POR FAVOR, SÓ POSTE SE A VAGA FOR PARA SALVADOR E CIDADES VIZINHAS! Use: "Desenvolvedor Front-end" ao invés de "Front-End Developer" \o/ Exemplo: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [NOME DA EMPRESA]` ================================================== --> ## Descrição da vaga - O Tech Manager é a ponte que une o produto com a parte técnica visando a entrega e acompanhamento dos projetos, provendo a robustez necessária para que tenhamos escalabilidade, estabilidade e segurança para suportar milhares de clientes navegando em nossas plataformas. ### RESPONSABILIDADES E ATRIBUIÇÕES - Sua principal missão é garantir que as ações técnicas se encaixem com as ações globais e vice versa, garantindo que estão sendo tomadas as melhores decisões para o produto no que diz respeito aos sistema e a arquitetura de infraestrutura. Também, é uma de suas missões, dimensionar tamanho e necessidades da equipe, identificando e removendo os impedimentos que não estão ao alcance do próprio time resolver, garantindo visibilidade para dentro e para fora do Squad. - Além disso, é a pessoa que faz a gestão do dia a dia dos desenvolvedores da Squad que atua sendo referência para assuntos como: banco de horas, férias, engajamento e clima, dúvidas do dia a dia e feedbacks situacionais. ## Local - Remoto ## Benefícios - Assistências médica (você e +1 dependente ficam por nossa conta!); - Assistência odontológica; - Seguro de vida; - Vale Transporte e/ou Auxílio combustível; - Vale Refeição e/ou Vale Alimentação; - Vale Cultura; - Gympass; - PicPay Acolhe - Programa que cuida da gente e dos nossos familiares, oferecendo apoio jurídico, social, psicológico e financeiro; - PPR - Participação nos Resultados do PicPay; - Horário flexível e possibilidade de home office. ## Requisitos **Obrigatórios:** - Visão de produto e métricas de negócio; - Gestão de pessoas; - Gestão de times de Desenvolvimento; - APIs, Gateways, Aplicativos Mobile; - Microserviços e Containers; - Ambiente de grande escala; - Métricas e projeções (leadtime, throughput, burnup); - Relação com parceiros externos e internos; - Ferramental de gestão e de visibilidade; - Frameworks ágeis **Diferenciais:** - Arquitetura de Event Streaming; - Java, PHP ou GO; - Protocolos de comunicação instantânea (XMPP, MQTT ou WEBSOCKET). ## Contratação - a combinar ## Nossa empresa - Fazer todos os seus pagamentos de forma simples e rápida. Na nossa plataforma PicPay dá para enviar e receber dinheiro, fazer um Pix, pagar boletos, estabelecimentos, comprar crédito para o celular, para games online e muito mais. Tudo isso direto do celular. Nos últimos anos, o PicPay conquistou milhões de usuários e busca diariamente tornar a vida das pessoas mais fácil. - Contamos com um time de pessoas fantásticas e que amam o que fazem, somos #PicPayLovers. ## Como se candidatar - Por favor envie um email para email@email.com.br com seu CV anexado - enviar no assunto: Vaga NodeJS - [Clique aqui para se candidatar](cole aqui o link para a vaga)
1.0
[TECH MANAGER] [REMOTO] [TAMBEM PCD] Tech Manager | Social na [PICPAY] - <!-- ================================================== POR FAVOR, SÓ POSTE SE A VAGA FOR PARA SALVADOR E CIDADES VIZINHAS! Use: "Desenvolvedor Front-end" ao invés de "Front-End Developer" \o/ Exemplo: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [NOME DA EMPRESA]` ================================================== --> ## Descrição da vaga - O Tech Manager é a ponte que une o produto com a parte técnica visando a entrega e acompanhamento dos projetos, provendo a robustez necessária para que tenhamos escalabilidade, estabilidade e segurança para suportar milhares de clientes navegando em nossas plataformas. ### RESPONSABILIDADES E ATRIBUIÇÕES - Sua principal missão é garantir que as ações técnicas se encaixem com as ações globais e vice versa, garantindo que estão sendo tomadas as melhores decisões para o produto no que diz respeito aos sistema e a arquitetura de infraestrutura. Também, é uma de suas missões, dimensionar tamanho e necessidades da equipe, identificando e removendo os impedimentos que não estão ao alcance do próprio time resolver, garantindo visibilidade para dentro e para fora do Squad. - Além disso, é a pessoa que faz a gestão do dia a dia dos desenvolvedores da Squad que atua sendo referência para assuntos como: banco de horas, férias, engajamento e clima, dúvidas do dia a dia e feedbacks situacionais. ## Local - Remoto ## Benefícios - Assistências médica (você e +1 dependente ficam por nossa conta!); - Assistência odontológica; - Seguro de vida; - Vale Transporte e/ou Auxílio combustível; - Vale Refeição e/ou Vale Alimentação; - Vale Cultura; - Gympass; - PicPay Acolhe - Programa que cuida da gente e dos nossos familiares, oferecendo apoio jurídico, social, psicológico e financeiro; - PPR - Participação nos Resultados do PicPay; - Horário flexível e possibilidade de home office. ## Requisitos **Obrigatórios:** - Visão de produto e métricas de negócio; - Gestão de pessoas; - Gestão de times de Desenvolvimento; - APIs, Gateways, Aplicativos Mobile; - Microserviços e Containers; - Ambiente de grande escala; - Métricas e projeções (leadtime, throughput, burnup); - Relação com parceiros externos e internos; - Ferramental de gestão e de visibilidade; - Frameworks ágeis **Diferenciais:** - Arquitetura de Event Streaming; - Java, PHP ou GO; - Protocolos de comunicação instantânea (XMPP, MQTT ou WEBSOCKET). ## Contratação - a combinar ## Nossa empresa - Fazer todos os seus pagamentos de forma simples e rápida. Na nossa plataforma PicPay dá para enviar e receber dinheiro, fazer um Pix, pagar boletos, estabelecimentos, comprar crédito para o celular, para games online e muito mais. Tudo isso direto do celular. Nos últimos anos, o PicPay conquistou milhões de usuários e busca diariamente tornar a vida das pessoas mais fácil. - Contamos com um time de pessoas fantásticas e que amam o que fazem, somos #PicPayLovers. ## Como se candidatar - Por favor envie um email para email@email.com.br com seu CV anexado - enviar no assunto: Vaga NodeJS - [Clique aqui para se candidatar](cole aqui o link para a vaga)
non_defect
tech manager social na por favor só poste se a vaga for para salvador e cidades vizinhas use desenvolvedor front end ao invés de front end developer o exemplo desenvolvedor front end na descrição da vaga o tech manager é a ponte que une o produto com a parte técnica visando a entrega e acompanhamento dos projetos provendo a robustez necessária para que tenhamos escalabilidade estabilidade e segurança para suportar milhares de clientes navegando em nossas plataformas responsabilidades e atribuições sua principal missão é garantir que as ações técnicas se encaixem com as ações globais e vice versa garantindo que estão sendo tomadas as melhores decisões para o produto no que diz respeito aos sistema e a arquitetura de infraestrutura também é uma de suas missões dimensionar tamanho e necessidades da equipe identificando e removendo os impedimentos que não estão ao alcance do próprio time resolver garantindo visibilidade para dentro e para fora do squad além disso é a pessoa que faz a gestão do dia a dia dos desenvolvedores da squad que atua sendo referência para assuntos como banco de horas férias engajamento e clima dúvidas do dia a dia e feedbacks situacionais local remoto benefícios assistências médica você e dependente ficam por nossa conta assistência odontológica seguro de vida vale transporte e ou auxílio combustível vale refeição e ou vale alimentação vale cultura gympass picpay acolhe programa que cuida da gente e dos nossos familiares oferecendo apoio jurídico social psicológico e financeiro ppr participação nos resultados do picpay horário flexível e possibilidade de home office requisitos obrigatórios visão de produto e métricas de negócio gestão de pessoas gestão de times de desenvolvimento apis gateways aplicativos mobile microserviços e containers ambiente de grande escala métricas e projeções leadtime throughput burnup relação com parceiros externos e internos ferramental de gestão e de visibilidade frameworks ágeis diferenciais arquitetura de event streaming java php ou go protocolos de comunicação instantânea xmpp mqtt ou websocket contratação a combinar nossa empresa fazer todos os seus pagamentos de forma simples e rápida na nossa plataforma picpay dá para enviar e receber dinheiro fazer um pix pagar boletos estabelecimentos comprar crédito para o celular para games online e muito mais tudo isso direto do celular nos últimos anos o picpay conquistou milhões de usuários e busca diariamente tornar a vida das pessoas mais fácil contamos com um time de pessoas fantásticas e que amam o que fazem somos picpaylovers como se candidatar por favor envie um email para email email com br com seu cv anexado enviar no assunto vaga nodejs cole aqui o link para a vaga
0
319,155
23,759,404,813
IssuesEvent
2022-09-01 07:34:29
aws-amplify/amplify-js
https://api.github.com/repos/aws-amplify/amplify-js
closed
AppSync Not Using IAM for Auth
good first issue documentation GraphQL React
### Before opening, please confirm: - [X] I have [searched for duplicate or closed issues](https://github.com/aws-amplify/amplify-js/issues?q=is%3Aissue+) and [discussions](https://github.com/aws-amplify/amplify-js/discussions). - [X] I have read the guide for [submitting bug reports](https://github.com/aws-amplify/amplify-js/blob/main/CONTRIBUTING.md#bug-reports). - [X] I have done my best to include a minimal, self-contained set of instructions for consistently reproducing the issue. ### JavaScript Framework React ### Amplify APIs GraphQL API ### Amplify Categories api ### Environment information <details> ``` System: OS: macOS 12.4 CPU: (10) arm64 Apple M1 Pro Memory: 733.83 MB / 32.00 GB Shell: 5.8.1 - /bin/zsh Binaries: Node: 18.7.0 - ~/.nvm/versions/node/v18.7.0/bin/node npm: 8.15.0 - ~/.nvm/versions/node/v18.7.0/bin/npm Browsers: Chrome: 104.0.5112.101 Firefox: 91.11.0 Safari: 15.5 npmPackages: @aws-amplify/ui-react: ^3.4.1 => 3.4.1 @aws-amplify/ui-react-internal: undefined () @aws-amplify/ui-react-legacy: undefined () @testing-library/jest-dom: ^5.16.5 => 5.16.5 @testing-library/react: ^13.3.0 => 13.3.0 @testing-library/user-event: ^13.5.0 => 13.5.0 aws-amplify: ^4.3.32 => 4.3.32 react: ^18.2.0 => 18.2.0 (18.0.0) react-dom: ^18.2.0 => 18.2.0 react-scripts: 5.0.1 => 5.0.1 web-vitals: ^2.1.4 => 2.1.4 npmGlobalPackages: @aws-amplify/cli: 9.2.1 corepack: 0.12.1 npm: 8.15.0 ``` </details> ### Describe the bug I'm attempting to use IAM as an authentication mechanism for an AppSync/GraphQL API. As background, my React app will interface with another service that will provide temporary IAM credentials. For local testing, I'm attempting to use hardcoded credentials just to make it easier to figure out how to work with Amplify. Even when I have the GraphQL schema set to use `@aws_iam`, the `aws_appsync_authenticationType` set to `AWS_IAM', and `authMode` in the GraphQL API call set to `AWS_IAM`, I'm seeing "No credentials" errors. When I enable debug level logs in my React app, it actually looks like Amplify is trying to use Cognito in some way. ### Expected behavior I expect Amplify to use the credentials provided and to allow the client to invoke my AppSync endpoint without errors. ### Reproduction steps I'm mostly following the beginner tutorial provided in https://aws.amazon.com/getting-started/hands-on/build-react-app-amplify-graphql. ### Code Snippet ```javascript # schema.graphql type Note @model @auth(rules: [{ allow: public, provider: iam }]) @aws_iam { id: ID! name: String! 
description: String } # App.js import React, { useState, useEffect } from 'react'; import './App.css'; import { API, Amplify, Auth, graphqlOperation } from 'aws-amplify'; import { listNotes } from './graphql/queries'; import { createNote as createNoteMutation, deleteNote as deleteNoteMutation } from './graphql/mutations'; import config from './aws-exports'; import * as subscriptions from './graphql/subscriptions'; import AWS from "aws-sdk"; window.LOG_LEVEL = 'DEBUG'; const creds = { accessKeyId: '<temporary access key>', secretAccessKey: '<temporary secret access key>', sessionToken: "<temporary session token", } AWS.config.credentials = new AWS.Credentials(creds) Auth.currentCredentials() .then(d => console.log('data: ', d)) .catch(e => console.log('error: ', e)) config.aws_appsync_authenticationType = 'AWS_IAM'; Amplify.configure(config); const initialFormState = { name: '', description: '' } function App() { const [notes, setNotes] = useState([]); const [formData, setFormData] = useState(initialFormState); useEffect(() => { fetchNotes(); }, []); async function fetchNotes() { const apiData = await API.graphql({ query: listNotes, authMode: "AWS_IAM" }); setNotes(apiData.data.listNotes.items); } async function createNote() { if (!formData.name || !formData.description) return; const authToken = await getAuthToken() await API.graphql({ query: createNoteMutation, variables: { input: formData }, authMode: "AWS_LAMBDA", authToken: authToken }); setNotes([ ...notes, formData ]); setFormData(initialFormState); } async function deleteNote({ id }) { const authToken = await getAuthToken(); const newNotesArray = notes.filter(note => note.id !== id); setNotes(newNotesArray); await API.graphql({ query: deleteNoteMutation, variables: { input: { id } }, authMode: "AWS_LAMBDA", authToken: authToken }); } return ( <div className="App"> <h1>My Notes App</h1> <input onChange={e => setFormData({ ...formData, 'name': e.target.value})} placeholder="Note name" value={formData.name} /> <input onChange={e => setFormData({ ...formData, 'description': e.target.value})} placeholder="Note description" value={formData.description} /> <button onClick={createNote}>Create Note</button> <div style={{marginBottom: 30}}> { notes.map(note => ( <div key={note.id || note.name}> <h2>{note.name}</h2> <p>{note.description}</p> <button onClick={() => deleteNote(note)}>Delete note</button> </div> )) } </div> </div> ); } export default App; ``` ### Log output <details> ``` [DEBUG] 04:35.347 Credentials - getting credentials ConsoleLogger.ts:115 [DEBUG] 04:35.347 Credentials - picking up credentials ConsoleLogger.ts:115 [DEBUG] 04:35.347 Credentials - getting old cred promise [DEBUG] 04:35.348 AWSAppSyncRealTimeProvider - Authenticating with AWS_LAMBDA ConsoleLogger.ts:115 [DEBUG] 04:35.348 AuthClass - Getting current session ConsoleLogger.ts:115 [DEBUG] 04:35.348 AuthClass - getting guest credentials ConsoleLogger.ts:115 [DEBUG] 04:35.348 Credentials - setting credentials for guest ConsoleLogger.ts:115 [DEBUG] 04:35.348 Credentials - No Cognito Identity pool provided for unauthenticated access GraphQLAPI.ts:154 Uncaught (in promise) Error: No credentials at GraphQLAPIClass.<anonymous> (GraphQLAPI.ts:154:1) at step (reportWebVitals.js:13:1) at Object.next (reportWebVitals.js:13:1) at fulfilled (reportWebVitals.js:13:1) ``` ### aws-exports.js /* eslint-disable */ // WARNING: DO NOT EDIT. This file is automatically generated by AWS Amplify. It will be overwritten. 
const awsmobile = { "aws_project_region": "us-west-2", "aws_appsync_graphqlEndpoint": "https://inuwoxcofvgjthpdy7mjdqxvjq.appsync-api.us-west-2.amazonaws.com/graphql", "aws_appsync_region": "us-west-2", "aws_appsync_authenticationType": "AWS_LAMBDA" }; export default awsmobile; ### Manual configuration _No response_ ### Additional configuration _No response_ ### Mobile Device _No response_ ### Mobile Operating System _No response_ ### Mobile Browser _No response_ ### Mobile Browser Version _No response_ ### Additional information and screenshots _No response_
1.0
AppSync Not Using IAM for Auth - ### Before opening, please confirm: - [X] I have [searched for duplicate or closed issues](https://github.com/aws-amplify/amplify-js/issues?q=is%3Aissue+) and [discussions](https://github.com/aws-amplify/amplify-js/discussions). - [X] I have read the guide for [submitting bug reports](https://github.com/aws-amplify/amplify-js/blob/main/CONTRIBUTING.md#bug-reports). - [X] I have done my best to include a minimal, self-contained set of instructions for consistently reproducing the issue. ### JavaScript Framework React ### Amplify APIs GraphQL API ### Amplify Categories api ### Environment information <details> ``` System: OS: macOS 12.4 CPU: (10) arm64 Apple M1 Pro Memory: 733.83 MB / 32.00 GB Shell: 5.8.1 - /bin/zsh Binaries: Node: 18.7.0 - ~/.nvm/versions/node/v18.7.0/bin/node npm: 8.15.0 - ~/.nvm/versions/node/v18.7.0/bin/npm Browsers: Chrome: 104.0.5112.101 Firefox: 91.11.0 Safari: 15.5 npmPackages: @aws-amplify/ui-react: ^3.4.1 => 3.4.1 @aws-amplify/ui-react-internal: undefined () @aws-amplify/ui-react-legacy: undefined () @testing-library/jest-dom: ^5.16.5 => 5.16.5 @testing-library/react: ^13.3.0 => 13.3.0 @testing-library/user-event: ^13.5.0 => 13.5.0 aws-amplify: ^4.3.32 => 4.3.32 react: ^18.2.0 => 18.2.0 (18.0.0) react-dom: ^18.2.0 => 18.2.0 react-scripts: 5.0.1 => 5.0.1 web-vitals: ^2.1.4 => 2.1.4 npmGlobalPackages: @aws-amplify/cli: 9.2.1 corepack: 0.12.1 npm: 8.15.0 ``` </details> ### Describe the bug I'm attempting to use IAM as an authentication mechanism for an AppSync/GraphQL API. As background, my React app will interface with another service that will provide temporary IAM credentials. For local testing, I'm attempting to use hardcoded credentials just to make it easier to figure out how to work with Amplify. Even when I have the GraphQL schema set to use `@aws_iam`, the `aws_appsync_authenticationType` set to `AWS_IAM', and `authMode` in the GraphQL API call set to `AWS_IAM`, I'm seeing "No credentials" errors. When I enable debug level logs in my React app, it actually looks like Amplify is trying to use Cognito in some way. ### Expected behavior I expect Amplify to use the credentials provided and to allow the client to invoke my AppSync endpoint without errors. ### Reproduction steps I'm mostly following the beginner tutorial provided in https://aws.amazon.com/getting-started/hands-on/build-react-app-amplify-graphql. ### Code Snippet ```javascript # schema.graphql type Note @model @auth(rules: [{ allow: public, provider: iam }]) @aws_iam { id: ID! name: String! 
description: String } # App.js import React, { useState, useEffect } from 'react'; import './App.css'; import { API, Amplify, Auth, graphqlOperation } from 'aws-amplify'; import { listNotes } from './graphql/queries'; import { createNote as createNoteMutation, deleteNote as deleteNoteMutation } from './graphql/mutations'; import config from './aws-exports'; import * as subscriptions from './graphql/subscriptions'; import AWS from "aws-sdk"; window.LOG_LEVEL = 'DEBUG'; const creds = { accessKeyId: '<temporary access key>', secretAccessKey: '<temporary secret access key>', sessionToken: "<temporary session token", } AWS.config.credentials = new AWS.Credentials(creds) Auth.currentCredentials() .then(d => console.log('data: ', d)) .catch(e => console.log('error: ', e)) config.aws_appsync_authenticationType = 'AWS_IAM'; Amplify.configure(config); const initialFormState = { name: '', description: '' } function App() { const [notes, setNotes] = useState([]); const [formData, setFormData] = useState(initialFormState); useEffect(() => { fetchNotes(); }, []); async function fetchNotes() { const apiData = await API.graphql({ query: listNotes, authMode: "AWS_IAM" }); setNotes(apiData.data.listNotes.items); } async function createNote() { if (!formData.name || !formData.description) return; const authToken = await getAuthToken() await API.graphql({ query: createNoteMutation, variables: { input: formData }, authMode: "AWS_LAMBDA", authToken: authToken }); setNotes([ ...notes, formData ]); setFormData(initialFormState); } async function deleteNote({ id }) { const authToken = await getAuthToken(); const newNotesArray = notes.filter(note => note.id !== id); setNotes(newNotesArray); await API.graphql({ query: deleteNoteMutation, variables: { input: { id } }, authMode: "AWS_LAMBDA", authToken: authToken }); } return ( <div className="App"> <h1>My Notes App</h1> <input onChange={e => setFormData({ ...formData, 'name': e.target.value})} placeholder="Note name" value={formData.name} /> <input onChange={e => setFormData({ ...formData, 'description': e.target.value})} placeholder="Note description" value={formData.description} /> <button onClick={createNote}>Create Note</button> <div style={{marginBottom: 30}}> { notes.map(note => ( <div key={note.id || note.name}> <h2>{note.name}</h2> <p>{note.description}</p> <button onClick={() => deleteNote(note)}>Delete note</button> </div> )) } </div> </div> ); } export default App; ``` ### Log output <details> ``` [DEBUG] 04:35.347 Credentials - getting credentials ConsoleLogger.ts:115 [DEBUG] 04:35.347 Credentials - picking up credentials ConsoleLogger.ts:115 [DEBUG] 04:35.347 Credentials - getting old cred promise [DEBUG] 04:35.348 AWSAppSyncRealTimeProvider - Authenticating with AWS_LAMBDA ConsoleLogger.ts:115 [DEBUG] 04:35.348 AuthClass - Getting current session ConsoleLogger.ts:115 [DEBUG] 04:35.348 AuthClass - getting guest credentials ConsoleLogger.ts:115 [DEBUG] 04:35.348 Credentials - setting credentials for guest ConsoleLogger.ts:115 [DEBUG] 04:35.348 Credentials - No Cognito Identity pool provided for unauthenticated access GraphQLAPI.ts:154 Uncaught (in promise) Error: No credentials at GraphQLAPIClass.<anonymous> (GraphQLAPI.ts:154:1) at step (reportWebVitals.js:13:1) at Object.next (reportWebVitals.js:13:1) at fulfilled (reportWebVitals.js:13:1) ``` ### aws-exports.js /* eslint-disable */ // WARNING: DO NOT EDIT. This file is automatically generated by AWS Amplify. It will be overwritten. 
const awsmobile = { "aws_project_region": "us-west-2", "aws_appsync_graphqlEndpoint": "https://inuwoxcofvgjthpdy7mjdqxvjq.appsync-api.us-west-2.amazonaws.com/graphql", "aws_appsync_region": "us-west-2", "aws_appsync_authenticationType": "AWS_LAMBDA" }; export default awsmobile; ### Manual configuration _No response_ ### Additional configuration _No response_ ### Mobile Device _No response_ ### Mobile Operating System _No response_ ### Mobile Browser _No response_ ### Mobile Browser Version _No response_ ### Additional information and screenshots _No response_
non_defect
appsync not using iam for auth before opening please confirm i have and i have read the guide for i have done my best to include a minimal self contained set of instructions for consistently reproducing the issue javascript framework react amplify apis graphql api amplify categories api environment information system os macos cpu apple pro memory mb gb shell bin zsh binaries node nvm versions node bin node npm nvm versions node bin npm browsers chrome firefox safari npmpackages aws amplify ui react aws amplify ui react internal undefined aws amplify ui react legacy undefined testing library jest dom testing library react testing library user event aws amplify react react dom react scripts web vitals npmglobalpackages aws amplify cli corepack npm describe the bug i m attempting to use iam as an authentication mechanism for an appsync graphql api as background my react app will interface with another service that will provide temporary iam credentials for local testing i m attempting to use hardcoded credentials just to make it easier to figure out how to work with amplify even when i have the graphql schema set to use aws iam the aws appsync authenticationtype set to aws iam and authmode in the graphql api call set to aws iam i m seeing no credentials errors when i enable debug level logs in my react app it actually looks like amplify is trying to use cognito in some way expected behavior i expect amplify to use the credentials provided and to allow the client to invoke my appsync endpoint without errors reproduction steps i m mostly following the beginner tutorial provided in code snippet javascript schema graphql type note model auth rules aws iam id id name string description string app js import react usestate useeffect from react import app css import api amplify auth graphqloperation from aws amplify import listnotes from graphql queries import createnote as createnotemutation deletenote as deletenotemutation from graphql mutations import config from aws exports import as subscriptions from graphql subscriptions import aws from aws sdk window log level debug const creds accesskeyid secretaccesskey sessiontoken temporary session token aws config credentials new aws credentials creds auth currentcredentials then d console log data d catch e console log error e config aws appsync authenticationtype aws iam amplify configure config const initialformstate name description function app const usestate const usestate initialformstate useeffect fetchnotes async function fetchnotes const apidata await api graphql query listnotes authmode aws iam setnotes apidata data listnotes items async function createnote if formdata name formdata description return const authtoken await getauthtoken await api graphql query createnotemutation variables input formdata authmode aws lambda authtoken authtoken setnotes setformdata initialformstate async function deletenote id const authtoken await getauthtoken const newnotesarray notes filter note note id id setnotes newnotesarray await api graphql query deletenotemutation variables input id authmode aws lambda authtoken authtoken return my notes app input onchange e setformdata formdata name e target value placeholder note name value formdata name input onchange e setformdata formdata description e target value placeholder note description value formdata description create note notes map note note name note description deletenote note delete note export default app log output credentials getting credentials consolelogger ts credentials picking up credentials 
consolelogger ts credentials getting old cred promise awsappsyncrealtimeprovider authenticating with aws lambda consolelogger ts authclass getting current session consolelogger ts authclass getting guest credentials consolelogger ts credentials setting credentials for guest consolelogger ts credentials no cognito identity pool provided for unauthenticated access graphqlapi ts uncaught in promise error no credentials at graphqlapiclass graphqlapi ts at step reportwebvitals js at object next reportwebvitals js at fulfilled reportwebvitals js aws exports js eslint disable warning do not edit this file is automatically generated by aws amplify it will be overwritten const awsmobile aws project region us west aws appsync graphqlendpoint aws appsync region us west aws appsync authenticationtype aws lambda export default awsmobile manual configuration no response additional configuration no response mobile device no response mobile operating system no response mobile browser no response mobile browser version no response additional information and screenshots no response
0
569,405
17,013,693,087
IssuesEvent
2021-07-02 09:00:19
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
bitdefender-antivirus-free.en.uptodown.com - site is not usable
browser-focus-geckoview engine-gecko priority-important
<!-- @browser: Firefox Mobile 89.0 --> <!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:89.0) Gecko/89.0 Firefox/89.0 --> <!-- @reported_with: unknown --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/78915 --> <!-- @extra_labels: browser-focus-geckoview --> **URL**: https://bitdefender-antivirus-free.en.uptodown.com/android/download **Browser / Version**: Firefox Mobile 89.0 **Operating System**: Android 10 **Tested Another Browser**: Yes Chrome **Problem type**: Site is not usable **Description**: Page not loading correctly **Steps to Reproduce**: Websites are behaving abnormally <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
bitdefender-antivirus-free.en.uptodown.com - site is not usable - <!-- @browser: Firefox Mobile 89.0 --> <!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:89.0) Gecko/89.0 Firefox/89.0 --> <!-- @reported_with: unknown --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/78915 --> <!-- @extra_labels: browser-focus-geckoview --> **URL**: https://bitdefender-antivirus-free.en.uptodown.com/android/download **Browser / Version**: Firefox Mobile 89.0 **Operating System**: Android 10 **Tested Another Browser**: Yes Chrome **Problem type**: Site is not usable **Description**: Page not loading correctly **Steps to Reproduce**: Websites are behaving abnormally <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_defect
bitdefender antivirus free en uptodown com site is not usable url browser version firefox mobile operating system android tested another browser yes chrome problem type site is not usable description page not loading correctly steps to reproduce websites are behaving abnormally browser configuration none from with ❤️
0
238,652
26,144,136,038
IssuesEvent
2022-12-30 00:15:55
RG4421/bub
https://api.github.com/repos/RG4421/bub
opened
CVE-2022-3064 (High) detected in gopkg.in/yaml.v2-v2.0.0
security vulnerability
## CVE-2022-3064 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>gopkg.in/yaml.v2-v2.0.0</b></p></summary> <p>YAML support for the Go language.</p> <p>Library home page: <a href="https://proxy.golang.org/gopkg.in/yaml.v2/@v/v2.0.0.zip">https://proxy.golang.org/gopkg.in/yaml.v2/@v/v2.0.0.zip</a></p> <p> Dependency Hierarchy: - :x: **gopkg.in/yaml.v2-v2.0.0** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/RG4421/bub/commit/0ff6d6478520d0c82eb1293c7a716d112defa079">0ff6d6478520d0c82eb1293c7a716d112defa079</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Parsing malicious or large YAML documents can consume excessive amounts of CPU or memory. <p>Publish Date: 2022-12-27 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-3064>CVE-2022-3064</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://pkg.go.dev/vuln/GO-2022-0956">https://pkg.go.dev/vuln/GO-2022-0956</a></p> <p>Release Date: 2022-12-27</p> <p>Fix Resolution: v2.2.4</p> </p> </details> <p></p>
True
CVE-2022-3064 (High) detected in gopkg.in/yaml.v2-v2.0.0 - ## CVE-2022-3064 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>gopkg.in/yaml.v2-v2.0.0</b></p></summary> <p>YAML support for the Go language.</p> <p>Library home page: <a href="https://proxy.golang.org/gopkg.in/yaml.v2/@v/v2.0.0.zip">https://proxy.golang.org/gopkg.in/yaml.v2/@v/v2.0.0.zip</a></p> <p> Dependency Hierarchy: - :x: **gopkg.in/yaml.v2-v2.0.0** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/RG4421/bub/commit/0ff6d6478520d0c82eb1293c7a716d112defa079">0ff6d6478520d0c82eb1293c7a716d112defa079</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Parsing malicious or large YAML documents can consume excessive amounts of CPU or memory. <p>Publish Date: 2022-12-27 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-3064>CVE-2022-3064</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://pkg.go.dev/vuln/GO-2022-0956">https://pkg.go.dev/vuln/GO-2022-0956</a></p> <p>Release Date: 2022-12-27</p> <p>Fix Resolution: v2.2.4</p> </p> </details> <p></p>
non_defect
cve high detected in gopkg in yaml cve high severity vulnerability vulnerable library gopkg in yaml yaml support for the go language library home page a href dependency hierarchy x gopkg in yaml vulnerable library found in head commit a href found in base branch master vulnerability details parsing malicious or large yaml documents can consume excessive amounts of cpu or memory publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution
0
127,713
12,337,622,669
IssuesEvent
2020-05-14 15:15:02
regolith-linux/regolith-desktop
https://api.github.com/repos/regolith-linux/regolith-desktop
opened
Translate regolith-linux.org to Portuguese
documentation help wanted
The earlier site was translated ([thanks to those that worked on the translation!](https://github.com/regolith-linux/regolith-desktop/issues/144)), but the content was almost entirely re-written and the underlying web framework used to describe content was changed. So, the translation will need to be redone. I wanted to wait until the new site structure and content was relatively stable before asking for another translation. Website repo: https://github.com/regolith-linux/website Default (English) front page: https://github.com/regolith-linux/website/blob/master/content/en/_index.md Languages are added by copying `content/en` to `content/<language>` and replacing the English text with another language. Then, the site configuration is [updated here](https://github.com/regolith-linux/website/blob/master/config.toml#L54-L60) to allow users to select the language from a drop-down at the top of the page.
1.0
Translate regolith-linux.org to Portuguese - The earlier site was translated ([thanks to those that worked on the translation!](https://github.com/regolith-linux/regolith-desktop/issues/144)), but the content was almost entirely re-written and the underlying web framework used to describe content was changed. So, the translation will need to be redone. I wanted to wait until the new site structure and content was relatively stable before asking for another translation. Website repo: https://github.com/regolith-linux/website Default (English) front page: https://github.com/regolith-linux/website/blob/master/content/en/_index.md Languages are added by copying `content/en` to `content/<language>` and replacing the English text with another language. Then, the site configuration is [updated here](https://github.com/regolith-linux/website/blob/master/config.toml#L54-L60) to allow users to select the language from a drop-down at the top of the page.
non_defect
translate regolith linux org to portuguese the earlier site was translated but the content was almost entirely re written and the underlying web framework used to describe content was changed so the translation will need to be redone i wanted to wait until the new site structure and content was relatively stable before asking for another translation website repo default english front page languages are added by copying content en to content and replacing the english text with another language then the site configuration is to allow users to select the language from a drop down at the top of the page
0
216,084
16,740,442,344
IssuesEvent
2021-06-11 09:07:11
nbQA-dev/nbQA
https://api.github.com/repos/nbQA-dev/nbQA
closed
TST Add tests to cover cases where magic replacement raises AssertionError
tests
While handling line magics like `%time` that contains python code, we could run into AssertionError when the regex match fails. Add tests to test those code paths raising `AssertionError`
1.0
TST Add tests to cover cases where magic replacement raises AssertionError - While handling line magics like `%time` that contains python code, we could run into AssertionError when the regex match fails. Add tests to test those code paths raising `AssertionError`
non_defect
tst add tests to cover cases where magic replacement raises assertionerror while handling line magics like time that contains python code we could run into assertionerror when the regex match fails add tests to test those code paths raising assertionerror
0
4,047
2,610,086,544
IssuesEvent
2015-02-26 18:26:18
chrsmith/dsdsdaadf
https://api.github.com/repos/chrsmith/dsdsdaadf
opened
深圳除青春痘要多少钱
auto-migrated Priority-Medium Type-Defect
``` 深圳除青春痘要多少钱【深圳韩方科颜全国热线400-869-1818,24 小时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩�� �秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,� ��方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹 ”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内�� �业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上� ��痘痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:13
1.0
深圳除青春痘要多少钱 - ``` 深圳除青春痘要多少钱【深圳韩方科颜全国热线400-869-1818,24 小时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩�� �秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,� ��方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹 ”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内�� �业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上� ��痘痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:13
defect
深圳除青春痘要多少钱 深圳除青春痘要多少钱【 , 】深圳韩方科颜专业祛痘连锁机构,机构以韩�� �秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,� ��方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹 ”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内�� �业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上� ��痘痘。 original issue reported on code google com by szft com on may at
1
120,736
15,797,829,261
IssuesEvent
2021-04-02 17:28:47
OperationCode/front-end
https://api.github.com/repos/OperationCode/front-end
closed
Design: Page for /employers
Needs: Design/Mock(s) Type: Feature/Redesign
# Feature Similar to a product demo, an employers demo will be a landing for companies, organizations and government agencies that want to know more about hiring our military veterans, spouses and military family members. ## Why is this feature being added? This feature enables employers to quickly and seamlessly learn more about our organization and determine whether hiring one of our members makes sense. The value add is that CEOs, C-level and hiring managers don't have to wait to get in touch, they can have all the data at their fingertips at an instance and make an informed decision internally before reaching out. We want to limit the warming period as much as possible. We want employers to know the value add before reaching out. ## What should your feature do? - The landing (or popup) will ask for name, title, company, email and phone number to request more information. This data can populate back into our Airtable where the CEO and/or Chief of Staff can make contact and answer their questions and next steps. - The automatic email response will send the recipient our short deck, and a link to a visualization graph of our membership, including (1) total number in our community (via Slack), (2) geographic location (via zip code), (3) military branch, (4) number of employed, (5) number underemployed/unemployed and actively looking for a software developer role, and (6) programming languages. - Internally, we should be able to update this and automate as much as possible. Internally, we should have a special code to access the visualization graph (or make it open) to brief folks when we're giving a talk or in a meeting.
2.0
Design: Page for /employers - # Feature Similar to a product demo, an employers demo will be a landing for companies, organizations and government agencies that want to know more about hiring our military veterans, spouses and military family members. ## Why is this feature being added? This feature enables employers to quickly and seamlessly learn more about our organization and determine whether hiring one of our members makes sense. The value add is that CEOs, C-level and hiring managers don't have to wait to get in touch, they can have all the data at their fingertips at an instance and make an informed decision internally before reaching out. We want to limit the warming period as much as possible. We want employers to know the value add before reaching out. ## What should your feature do? - The landing (or popup) will ask for name, title, company, email and phone number to request more information. This data can populate back into our Airtable where the CEO and/or Chief of Staff can make contact and answer their questions and next steps. - The automatic email response will send the recipient our short deck, and a link to a visualization graph of our membership, including (1) total number in our community (via Slack), (2) geographic location (via zip code), (3) military branch, (4) number of employed, (5) number underemployed/unemployed and actively looking for a software developer role, and (6) programming languages. - Internally, we should be able to update this and automate as much as possible. Internally, we should have a special code to access the visualization graph (or make it open) to brief folks when we're giving a talk or in a meeting.
non_defect
design page for employers feature similar to a product demo an employers demo will be a landing for companies organizations and government agencies that want to know more about hiring our military veterans spouses and military family members why is this feature being added this feature enables employers to quickly and seamlessly learn more about our organization and determine whether hiring one of our members makes sense the value add is that ceos c level and hiring managers don t have to wait to get in touch they can have all the data at their fingertips at an instance and make an informed decision internally before reaching out we want to limit the warming period as much as possible we want employers to know the value add before reaching out what should your feature do the landing or popup will ask for name title company email and phone number to request more information this data can populate back into our airtable where the ceo and or chief of staff can make contact and answer their questions and next steps the automatic email response will send the recipient our short deck and a link to a visualization graph of our membership including total number in our community via slack geographic location via zip code military branch number of employed number underemployed unemployed and actively looking for a software developer role and programming languages internally we should be able to update this and automate as much as possible internally we should have a special code to access the visualization graph or make it open to brief folks when we re giving a talk or in a meeting
0
267,492
8,389,497,810
IssuesEvent
2018-10-09 09:45:23
AGROFIMS/hagrofims
https://api.github.com/repos/AGROFIMS/hagrofims
opened
Add sign to the mandatory fields
enhancement interface low priority
users don't know which fields are mandatory to fill. A red start could help highlighting the mendatory fields.
1.0
Add sign to the mandatory fields - users don't know which fields are mandatory to fill. A red start could help highlighting the mendatory fields.
non_defect
add sign to the mandatory fields users don t know which fields are mandatory to fill a red star could help highlight the mandatory fields
0
27,719
5,081,418,071
IssuesEvent
2016-12-29 10:14:31
contao/core-bundle
https://api.github.com/repos/contao/core-bundle
closed
The search index can't rebuild in multidomain installation
defect
When trying to rebuild the search index in the backend, all other domains, than the one logged in, can´t be indexed. The reason is `Access-Control-Allow-Origin`. From the chrome dev tools log: ``` XMLHttpRequest cannot load https://test.agoat.info/. Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'https://contao4.agoat.info' is therefore not allowed access. ``` Can the Access-Control-Allow-Origin Header be added safely?
1.0
The search index can't rebuild in multidomain installation - When trying to rebuild the search index in the backend, all other domains, than the one logged in, can´t be indexed. The reason is `Access-Control-Allow-Origin`. From the chrome dev tools log: ``` XMLHttpRequest cannot load https://test.agoat.info/. Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'https://contao4.agoat.info' is therefore not allowed access. ``` Can the Access-Control-Allow-Origin Header be added safely?
defect
the search index can t rebuild in multidomain installation when trying to rebuild the search index in the backend all other domains than the one logged in can´t be indexed the reason is access control allow origin from the chrome dev tools log xmlhttprequest cannot load response to preflight request doesn t pass access control check no access control allow origin header is present on the requested resource origin is therefore not allowed access can the access control allow origin header be added safely
1
21,195
3,689,161,370
IssuesEvent
2016-02-25 15:35:20
javaslang/javaslang
https://api.github.com/repos/javaslang/javaslang
closed
Is Traversable#clear() really necessary
design/refactoring
Hi, clear sounds very dirty and mutable to me, is it really necessary? Is it for cases where you want a new empty instance of a Traversable but you don't know what concrete class you have? Would something like newEmpty() or something be better if it's really needed?
1.0
Is Traversable#clear() really necessary - Hi, clear sounds very dirty and mutable to me, is it really necessary? Is it for cases where you want a new empty instance of a Traversable but you don't know what concrete class you have? Would something like newEmpty() or something be better if it's really needed?
non_defect
is traversable clear really necessary hi clear sounds very dirty and mutable to me is it really necessary is it for cases where you want a new empty instance of a traversable but you don t know what concrete class you have would something like newempty or something be better if it s really needed
0
227,296
7,529,568,002
IssuesEvent
2018-04-14 06:39:43
CS2103JAN2018-F11-B4/main
https://api.github.com/repos/CS2103JAN2018-F11-B4/main
closed
App title bar not reflective of app name
priority.high
![image](https://user-images.githubusercontent.com/3660764/38409884-bf520aba-39b5-11e8-9680-aea3174b5700.png) Should be renamed to "Delivery" instead of "Address App" <sub>[original: nus-cs2103-AY1718S2/pe-round1#7]</sub> Issue created by: @jordancjq
1.0
App title bar not reflective of app name - ![image](https://user-images.githubusercontent.com/3660764/38409884-bf520aba-39b5-11e8-9680-aea3174b5700.png) Should be renamed to "Delivery" instead of "Address App" <sub>[original: nus-cs2103-AY1718S2/pe-round1#7]</sub> Issue created by: @jordancjq
non_defect
app title bar not reflective of app name should be renamed to delivery instead of address app issue created by jordancjq
0
54,370
11,220,667,318
IssuesEvent
2020-01-07 16:14:41
atomist/sdm-pack-ecs
https://api.github.com/repos/atomist/sdm-pack-ecs
closed
Code Inspection: Tslint on update_service_merging
code-inspection
### max-file-line-count - [`test/support/ecsDataCallback.test.ts:508`](https://github.com/atomist/sdm-pack-ecs/blob/24c391e66da89f525119c3d79c2e632fb76809bc/test/support/ecsDataCallback.test.ts#L508): _(warn)_ This file has 509 lines, which exceeds the maximum of 500 lines allowed. Consider breaking this file up into smaller parts [atomist:code-inspection:update_service_merging=@atomist/atomist-sdm]
1.0
Code Inspection: Tslint on update_service_merging - ### max-file-line-count - [`test/support/ecsDataCallback.test.ts:508`](https://github.com/atomist/sdm-pack-ecs/blob/24c391e66da89f525119c3d79c2e632fb76809bc/test/support/ecsDataCallback.test.ts#L508): _(warn)_ This file has 509 lines, which exceeds the maximum of 500 lines allowed. Consider breaking this file up into smaller parts [atomist:code-inspection:update_service_merging=@atomist/atomist-sdm]
non_defect
code inspection tslint on update service merging max file line count warn this file has lines which exceeds the maximum of lines allowed consider breaking this file up into smaller parts
0
15,207
2,850,318,034
IssuesEvent
2015-05-31 13:34:21
damonkohler/sl4a
https://api.github.com/repos/damonkohler/sl4a
opened
adb shell python "dlopen libpython2.6.so"
auto-migrated Priority-Medium Type-Defect
_From @GoogleCodeExporter on May 31, 2015 11:31_ ``` ./adb install PythonForAndroid_r4.apk UI execute apk & install python download:http://code.google.com/p/python-for-android/source/browse/python-build/ standalone_python.sh step 1:"#! /bin/sh" replace "#! /system/bin/sh" step 2::"storage: replace "sdcard" step 3:save as python (cmd:./adb root) ./adb push python /system/bin ./adb shell root@android:/ #python ########################################## dlopen libpython2.6.so Python 2.6.2 (r262:71600, Mar 20 2011, 16:54:21) [GCC 4.4.3] on linux-armv7l Type "help", "copyright", "credits" or "license" for more information. >>> ########################################## python does work, but I also get an error: dlopen libpython2.6.so what happen?? ``` Original issue reported on code.google.com by `fihtest...@gmail.com` on 4 Jan 2013 at 7:01 _Copied from original issue: damonkohler/android-scripting#671_
1.0
adb shell python "dlopen libpython2.6.so" - _From @GoogleCodeExporter on May 31, 2015 11:31_ ``` ./adb install PythonForAndroid_r4.apk UI execute apk & install python download:http://code.google.com/p/python-for-android/source/browse/python-build/ standalone_python.sh step 1:"#! /bin/sh" replace "#! /system/bin/sh" step 2::"storage: replace "sdcard" step 3:save as python (cmd:./adb root) ./adb push python /system/bin ./adb shell root@android:/ #python ########################################## dlopen libpython2.6.so Python 2.6.2 (r262:71600, Mar 20 2011, 16:54:21) [GCC 4.4.3] on linux-armv7l Type "help", "copyright", "credits" or "license" for more information. >>> ########################################## python does work, but I also get an error: dlopen libpython2.6.so what happen?? ``` Original issue reported on code.google.com by `fihtest...@gmail.com` on 4 Jan 2013 at 7:01 _Copied from original issue: damonkohler/android-scripting#671_
defect
adb shell python dlopen so from googlecodeexporter on may adb install pythonforandroid apk ui execute apk install python download standalone python sh step bin sh replace system bin sh step storage replace sdcard step save as python cmd adb root adb push python system bin adb shell root android python dlopen so python mar on linux type help copyright credits or license for more information python does work but i also get an error dlopen so what happen original issue reported on code google com by fihtest gmail com on jan at copied from original issue damonkohler android scripting
1
124,117
16,582,229,886
IssuesEvent
2021-05-31 13:24:29
exadg/exadg
https://api.github.com/repos/exadg/exadg
closed
Introduce ExaDG::Grid class?
software design
This issue aims at discussing whether an `ExaDG::Grid` class could be helpful and how this class should look like. In the ExaDG code, we realized that `Triangulation`, `PeriodicFaces`, and `Mapping` are tightly coupled. An idea would be to combine these items to a class `ExaDG::Grid`, with `triangulation`, `periodic_faces`, and `mapping` as public members. - This way, the code redundancy found in all `Driver` classes when constructing the `dealii::Triangulation` depending on the triangulation type (distributed, fully-distributed) could be avoided https://github.com/exadg/exadg/blob/c520cdfa6a491535d61b171e29ccad06f9d1832b/include/exadg/poisson/driver.cpp#L75-L89 - The interface of `Application::create_grid()` could be simplified https://github.com/exadg/exadg/blob/c520cdfa6a491535d61b171e29ccad06f9d1832b/include/exadg/poisson/driver.cpp#L93 - certain `Triangulation` functionality such as creating a fully-distributed triangulation currently leads to code redundancy that should actually only be implemented once https://github.com/exadg/exadg/blob/c520cdfa6a491535d61b171e29ccad06f9d1832b/applications/incompressible_navier_stokes/flow_past_cylinder/application.h#L476-L505 - This new class `ExaDG::Grid` could incorporate the functionality to read a grid from an external file (a grid generated, e.g., via gmsh) so that this functionality only needs to be implemented once in ExaDG. @kronbichler @peterrum @bergbauer Please feel free to share your ideas.
1.0
Introduce ExaDG::Grid class? - This issue aims at discussing whether an `ExaDG::Grid` class could be helpful and how this class should look like. In the ExaDG code, we realized that `Triangulation`, `PeriodicFaces`, and `Mapping` are tightly coupled. An idea would be to combine these items to a class `ExaDG::Grid`, with `triangulation`, `periodic_faces`, and `mapping` as public members. - This way, the code redundancy found in all `Driver` classes when constructing the `dealii::Triangulation` depending on the triangulation type (distributed, fully-distributed) could be avoided https://github.com/exadg/exadg/blob/c520cdfa6a491535d61b171e29ccad06f9d1832b/include/exadg/poisson/driver.cpp#L75-L89 - The interface of `Application::create_grid()` could be simplified https://github.com/exadg/exadg/blob/c520cdfa6a491535d61b171e29ccad06f9d1832b/include/exadg/poisson/driver.cpp#L93 - certain `Triangulation` functionality such as creating a fully-distributed triangulation currently leads to code redundancy that should actually only be implemented once https://github.com/exadg/exadg/blob/c520cdfa6a491535d61b171e29ccad06f9d1832b/applications/incompressible_navier_stokes/flow_past_cylinder/application.h#L476-L505 - This new class `ExaDG::Grid` could incorporate the functionality to read a grid from an external file (a grid generated, e.g., via gmsh) so that this functionality only needs to be implemented once in ExaDG. @kronbichler @peterrum @bergbauer Please feel free to share your ideas.
non_defect
introduce exadg grid class this issue aims at discussing whether an exadg grid class could be helpful and how this class should look like in the exadg code we realized that triangulation periodicfaces and mapping are tightly coupled an idea would be to combine these items to a class exadg grid with triangulation periodic faces and mapping as public members this way the code redundancy found in all driver classes when constructing the dealii triangulation depending on the triangulation type distributed fully distributed could be avoided the interface of application create grid could be simplified certain triangulation functionality such as creating a fully distributed triangulation currently leads to code redundancy that should actually only be implemented once this new class exadg grid could incorporate the functionality to read a grid from an external file a grid generated e g via gmsh so that this functionality only needs to be implemented once in exadg kronbichler peterrum bergbauer please feel free to share your ideas
0
16,516
2,910,016,680
IssuesEvent
2015-06-21 10:07:17
Security-Onion-Solutions/security-onion
https://api.github.com/repos/Security-Onion-Solutions/security-onion
closed
Consider changing default query_timeout in /etc/elsa_web.conf
auto-migrated Priority-Medium Type-Defect
``` Consider changing default query_timeout in /etc/elsa_web.conf ``` Original issue reported on code.google.com by `doug.bu...@gmail.com` on 7 Jan 2014 at 12:03
1.0
Consider changing default query_timeout in /etc/elsa_web.conf - ``` Consider changing default query_timeout in /etc/elsa_web.conf ``` Original issue reported on code.google.com by `doug.bu...@gmail.com` on 7 Jan 2014 at 12:03
defect
consider changing default query timeout in etc elsa web conf consider changing default query timeout in etc elsa web conf original issue reported on code google com by doug bu gmail com on jan at
1
7,578
25,182,668,038
IssuesEvent
2022-11-11 15:02:03
mlcommons/ck
https://api.github.com/repos/mlcommons/ck
closed
Automate creation of the catalog of all CM scripts
enhancement cm-script-automation
We are manually creating a catalog of CM components in our cm-mlops repo: * https://github.com/mlcommons/ck/tree/master/cm-mlops (thanks @arjunsuresh) I think we can generate it automatically if we add a key "catalog":{"group":"Apps", "title":"Image classification onnx"} to the meta.json of each CM script.
1.0
Automate creation of the catalog of all CM scripts - We are manually creating a catalog of CM components in our cm-mlops repo: * https://github.com/mlcommons/ck/tree/master/cm-mlops (thanks @arjunsuresh) I think we can generate it automatically if we add a key "catalog":{"group":"Apps", "title":"Image classification onnx"} to the meta.json of each CM script.
non_defect
automate creation of the catalog of all cm scripts we are manually creating a catalog of cm components in our cm mlops repo thanks arjunsuresh i think we can generate it automatically if we add a key catalog group apps title image classification onnx to the meta json of each cm script
0
69,537
3,305,267,148
IssuesEvent
2015-11-04 03:04:20
DigitalCampus/django-oppia
https://api.github.com/repos/DigitalCampus/django-oppia
opened
Points not getting awarded against a specific course
bug medium priority
the points still get awarded though just not against the actual course
1.0
Points not getting awarded against a specific course - the points still get awarded though just not against the actual course
non_defect
points not getting awarded against a specific course the points still get awarded though just not against the actual course
0
258,473
22,321,951,654
IssuesEvent
2022-06-14 07:20:38
wpeventmanager/wp-event-manager
https://api.github.com/repos/wpeventmanager/wp-event-manager
closed
Duplicate venue & organizer functionality is not working
In Testing
Duplicate venue & organizer functionality is not working. From Dashboard > Click on the Venue & Organizer > duplicate icon. A blank page is displayed, as shown below: ![image](https://user-images.githubusercontent.com/75515088/173499162-a3e58d0e-0c92-493f-b718-cfd4ffb5362b.png)
1.0
Duplicate venue & organizer functionality is not working - Duplicate venue & organizer functionality is not working. From Dashboard > Click on the Venue & Organizer > duplicate icon. A blank page is displayed, as shown below: ![image](https://user-images.githubusercontent.com/75515088/173499162-a3e58d0e-0c92-493f-b718-cfd4ffb5362b.png)
non_defect
duplicate venue organizer functionality is not working duplicate venue organizer functionality is not working from dashboard click on the venue organizer duplicate icon a blank page is displayed as shown below
0
11,295
2,648,909,690
IssuesEvent
2015-03-14 11:38:48
dtecta/motion-toolkit
https://api.github.com/repos/dtecta/motion-toolkit
closed
Singleton.hpp missing and missing #include <new> for std::bad_alloc in MemAlloc.cpp
auto-migrated Priority-Medium Type-Defect
``` Compile and see the errors. Attached is a patch that fixes both issues. ``` Original issue reported on code.google.com by `erwin.coumans` on 5 Mar 2015 at 4:25 Attachments: * [patch.diff](https://storage.googleapis.com/google-code-attachments/motion-toolkit/issue-1/comment-0/patch.diff)
1.0
Singleton.hpp missing and missing #include <new> for std::bad_alloc in MemAlloc.cpp - ``` Compile and see the errors. Attached is a patch that fixes both issues. ``` Original issue reported on code.google.com by `erwin.coumans` on 5 Mar 2015 at 4:25 Attachments: * [patch.diff](https://storage.googleapis.com/google-code-attachments/motion-toolkit/issue-1/comment-0/patch.diff)
defect
singleton hpp missing and missing include for std bad alloc in memalloc cpp compile and see the errors attached is a patch that fixes both issues original issue reported on code google com by erwin coumans on mar at attachments
1
78,259
27,397,359,157
IssuesEvent
2023-02-28 20:50:53
dotCMS/core
https://api.github.com/repos/dotCMS/core
closed
[Containers] : Save button is disables under specific circumstances
Type : Defect dotCMS : Content Management Team : Lunik Triage
### Problem Statement The `SAVE` button is not being enabled when trying to add a new Content Type to a new Container. ### Steps to Reproduce 1. Go to the new Containers portlet: http://localhost:8080/dotAdmin/#/containers and create a new Container. 2. Type in "1" in the `Max Contents` field, so that the `Content Type Code` field shows up. 3. Click the "+" button and add the `Contact` content type. 4. Type in any code in there. 5. The `SAVE` button is not enabled. Even if you try to change the Container's name or description, it remains disabled. ### Acceptance Criteria The `SAVE` button must be enabled after any change is made. ### dotCMS Version Latest master. ### Proposed Objective Technical User Experience ### Proposed Priority Priority 2 - Important ### External Links... Slack Conversations, Support Tickets, Figma Designs, etc. Found during IQA of https://github.com/dotCMS/core/issues/23443 ### Assumptions & Initiation Needs If you type in any characters for the default `Activity` content type, then the `SAVE` button is enabled. This is the only way to make it work. So there seems to be a problem with JS events at some point? ### Sub-Tasks & Estimates _No response_
1.0
[Containers] : Save button is disables under specific circumstances - ### Problem Statement The `SAVE` button is not being enabled when trying to add a new Content Type to a new Container. ### Steps to Reproduce 1. Go to the new Containers portlet: http://localhost:8080/dotAdmin/#/containers and create a new Container. 2. Type in "1" in the `Max Contents` field, so that the `Content Type Code` field shows up. 3. Click the "+" button and add the `Contact` content type. 4. Type in any code in there. 5. The `SAVE` button is not enabled. Even if you try to change the Container's name or description, it remains disabled. ### Acceptance Criteria The `SAVE` button must be enabled after any change is made. ### dotCMS Version Latest master. ### Proposed Objective Technical User Experience ### Proposed Priority Priority 2 - Important ### External Links... Slack Conversations, Support Tickets, Figma Designs, etc. Found during IQA of https://github.com/dotCMS/core/issues/23443 ### Assumptions & Initiation Needs If you type in any characters for the default `Activity` content type, then the `SAVE` button is enabled. This is the only way to make it work. So there seems to be a problem with JS events at some point? ### Sub-Tasks & Estimates _No response_
defect
save button is disables under specific circumstances problem statement the save button is not being enabled when trying to add a new content type to a new container steps to reproduce go to the new containers portlet and create a new container type in in the max contents field so that the content type code field shows up click the button and add the contact content type type in any code in there the save button is not enabled even if you try to change the container s name or description it remains disabled acceptance criteria the save button must be enabled after any change is made dotcms version latest master proposed objective technical user experience proposed priority priority important external links slack conversations support tickets figma designs etc found during iqa of assumptions initiation needs if you type in any characters for the default activity content type then the save button is enabled this is the only way to make it work so there seems to be a problem with js events at some point sub tasks estimates no response
1
69,024
22,061,297,404
IssuesEvent
2022-05-30 18:15:24
nanopb/nanopb
https://api.github.com/repos/nanopb/nanopb
closed
Make doesn't regenerate C files when .options file change
Type-Defect Priority-Low FixedInGit
Go to network_server example folder and run `make`, then `touch fileproto.options` and `make` again. Files are not regenerated.
1.0
Make doesn't regenerate C files when .options file change - Go to network_server example folder and run `make`, then `touch fileproto.options` and `make` again. Files are not regenerated.
defect
make doesn t regenerate c files when options file change go to network server example folder and run make then touch fileproto options and make again files are not regenerated
1
277,385
21,041,265,448
IssuesEvent
2022-03-31 12:33:12
jdi-testing/jdn-ai
https://api.github.com/repos/jdi-testing/jdn-ai
closed
Update backlog and QA docs
documentation
- [x] Update backlog - [x] Check test cases and mind map, update if necessary - [x] Check onboarding docs, update if necessary
1.0
Update backlog and QA docs - - [x] Update backlog - [x] Check test cases and mind map, update if necessary - [x] Check onboarding docs, update if necessary
non_defect
update backlog and qa docs update backlog check test cases and mind map update if necessary check onboarding docs update if necessary
0
62,725
17,186,066,077
IssuesEvent
2021-07-16 02:14:49
beefproject/beef
https://api.github.com/repos/beefproject/beef
closed
debug: false causing will exception on first request
Defect Priority Review duplicate issue
When making this configuration option true, when the first request i.e. to hit /ul/panel is made there is an exception. ## Environment *Please identify the environment in which your issue occurred.* 1. latest clones 2. 2.7.2 3. 4. backend (ubuntu 20.04) frontend safari (Mac OS - Big Surr) ## Configuration ## Expected vs. Actual Behaviour **Expected Behaviour:** thin debugging **Actual Behaviour:** exception ## Steps to Reproduce default clone from repo set config value under http http debug: true go to admin ui exception made
1.0
debug: false causing will exception on first request - When making this configuration option true, when the first request i.e. to hit /ul/panel is made there is an exception. ## Environment *Please identify the environment in which your issue occurred.* 1. latest clones 2. 2.7.2 3. 4. backend (ubuntu 20.04) frontend safari (Mac OS - Big Surr) ## Configuration ## Expected vs. Actual Behaviour **Expected Behaviour:** thin debugging **Actual Behaviour:** exception ## Steps to Reproduce default clone from repo set config value under http http debug: true go to admin ui exception made
defect
debug false causing will exception on first request when making this configuration option true when the first request i e to hit ul panel is made there is an exception environment please identify the environment in which your issue occurred latest clones backend ubuntu frontend safari mac os big surr configuration expected vs actual behaviour expected behaviour thin debugging actual behaviour exception steps to reproduce default clone from repo set config value under http http debug true go to admin ui exception made
1
66,989
16,766,754,022
IssuesEvent
2021-06-14 09:47:02
mindee/doctr
https://api.github.com/repos/mindee/doctr
closed
[pytorch] Add PyTorch support
critical module: datasets module: models module: transforms topic: build
This is a tracker for PyTorch integration for DocTR release 0.3.0 - [x] Enable flexible framework selection for build & imports (#306) - [ ] PyTorch mirror of doctr.datasets - [ ] PyTorch mirror of doctr.models (#310) - [ ] PyTorch mirror of doctr.transforms
1.0
[pytorch] Add PyTorch support - This is a tracker for PyTorch integration for DocTR release 0.3.0 - [x] Enable flexible framework selection for build & imports (#306) - [ ] PyTorch mirror of doctr.datasets - [ ] PyTorch mirror of doctr.models (#310) - [ ] PyTorch mirror of doctr.transforms
non_defect
add pytorch support this is a tracker for pytorch integration for doctr release enable flexible framework selection for build imports pytorch mirror of doctr datasets pytorch mirror of doctr models pytorch mirror of doctr transforms
0
49,908
7,545,863,284
IssuesEvent
2018-04-17 23:34:59
aurelia-ui-toolkits/aurelia-materialize-bridge
https://api.github.com/repos/aurelia-ui-toolkits/aurelia-materialize-bridge
closed
Installation issue
needs documentation
Hi. So I copied an older project to start a new one using Aurelia cli. (Don't know why I do this it always fails :) Anyway after correcting things I am having issue with aurelia-materialize-bridge. I have updated to "aurelia-cli": "^0.24.0", "materialize-css": "^0.98.2", "aurelia-materialize-bridge": "^0.28.0", the project starts but when going to the page I get the error vendor-bundle.js:25663 GET http://127.0.0.1:9000/scripts/materialize-bundle.html 404 (Not Found) vendor-bundle.js:1395 Unhandled rejection Error: src/../scripts/materialize-bundle.html HTTP status: 404 at XMLHttpRequest.xhr.onreadystatechange (http://127.0.0.1:9000/scripts/vendor-bundle.js:25649:31) From previous event: at DefaultLoader._import (http://127.0.0.1:9000/scripts/vendor-bundle.js:14203:14) ... I have gone through the installation instructs again and can't see the I am going wrong. Any idea what could cause this ? thanks
1.0
Installation issue - Hi. So I copied an older project to start a new one using Aurelia cli. (Don't know why I do this it always fails :) Anyway after correcting things I am having issue with aurelia-materialize-bridge. I have updated to "aurelia-cli": "^0.24.0", "materialize-css": "^0.98.2", "aurelia-materialize-bridge": "^0.28.0", the project starts but when going to the page I get the error vendor-bundle.js:25663 GET http://127.0.0.1:9000/scripts/materialize-bundle.html 404 (Not Found) vendor-bundle.js:1395 Unhandled rejection Error: src/../scripts/materialize-bundle.html HTTP status: 404 at XMLHttpRequest.xhr.onreadystatechange (http://127.0.0.1:9000/scripts/vendor-bundle.js:25649:31) From previous event: at DefaultLoader._import (http://127.0.0.1:9000/scripts/vendor-bundle.js:14203:14) ... I have gone through the installation instructs again and can't see the I am going wrong. Any idea what could cause this ? thanks
non_defect
installation issue hi so i copied an older project to start a new one using aurelia cli don t know why i do this it always fails anyway after correcting things i am having issue with aurelia materialize bridge i have updated to aurelia cli materialize css aurelia materialize bridge the project starts but when going to the page i get the error vendor bundle js get not found vendor bundle js unhandled rejection error src scripts materialize bundle html http status at xmlhttprequest xhr onreadystatechange from previous event at defaultloader import i have gone through the installation instructs again and can t see the i am going wrong any idea what could cause this thanks
0
47,125
13,056,036,531
IssuesEvent
2020-07-30 03:27:40
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
closed
test (Trac #30)
Migrated from Trac combo simulation defect
test of coolness Migrated from https://code.icecube.wisc.edu/ticket/30 ```json { "status": "closed", "changetime": "2007-06-04T20:21:15", "description": "test of coolness", "reporter": "bchristy", "cc": "", "resolution": "worksforme", "_ts": "1180988475000000", "component": "combo simulation", "summary": "test", "priority": "normal", "keywords": "test", "time": "2007-06-04T20:20:17", "milestone": "", "owner": "bchristy", "type": "defect" } ```
1.0
test (Trac #30) - test of coolness Migrated from https://code.icecube.wisc.edu/ticket/30 ```json { "status": "closed", "changetime": "2007-06-04T20:21:15", "description": "test of coolness", "reporter": "bchristy", "cc": "", "resolution": "worksforme", "_ts": "1180988475000000", "component": "combo simulation", "summary": "test", "priority": "normal", "keywords": "test", "time": "2007-06-04T20:20:17", "milestone": "", "owner": "bchristy", "type": "defect" } ```
defect
test trac test of coolness migrated from json status closed changetime description test of coolness reporter bchristy cc resolution worksforme ts component combo simulation summary test priority normal keywords test time milestone owner bchristy type defect
1
26,682
4,777,575,308
IssuesEvent
2016-10-27 16:40:17
wheeler-microfluidics/microdrop
https://api.github.com/repos/wheeler-microfluidics/microdrop
opened
Add support for `on_plugin_install.py` file in plugins (Trac #155)
defect dmf_control_board_plugin Incomplete Migration Migrated from Trac
Migrated from http://microfluidics.utoronto.ca/microdrop/ticket/155 ```json { "status": "new", "changetime": "2014-08-19T19:37:47", "description": "{{{\n#!md\n\n - Add support for `on_plugin_install.py` file in plugins.\n - Upon installation of plugin, execute the `on_plugin_install.py` script using the same Python executable used to run MicroDrop.\n - The `on_plugin_install.py` can, *e.g.*, be used to install dependencies using `pip`.\n\n}}}", "reporter": "cfobel", "cc": "", "resolution": "", "_ts": "1408477067467591", "component": "dmf_control_board_plugin", "summary": "Add support for `on_plugin_install.py` file in plugins", "priority": "major", "keywords": "", "version": "0.1", "time": "2014-08-19T19:37:47", "milestone": "", "owner": "", "type": "defect" } ```
1.0
Add support for `on_plugin_install.py` file in plugins (Trac #155) - Migrated from http://microfluidics.utoronto.ca/microdrop/ticket/155 ```json { "status": "new", "changetime": "2014-08-19T19:37:47", "description": "{{{\n#!md\n\n - Add support for `on_plugin_install.py` file in plugins.\n - Upon installation of plugin, execute the `on_plugin_install.py` script using the same Python executable used to run MicroDrop.\n - The `on_plugin_install.py` can, *e.g.*, be used to install dependencies using `pip`.\n\n}}}", "reporter": "cfobel", "cc": "", "resolution": "", "_ts": "1408477067467591", "component": "dmf_control_board_plugin", "summary": "Add support for `on_plugin_install.py` file in plugins", "priority": "major", "keywords": "", "version": "0.1", "time": "2014-08-19T19:37:47", "milestone": "", "owner": "", "type": "defect" } ```
defect
add support for on plugin install py file in plugins trac migrated from json status new changetime description n md n n add support for on plugin install py file in plugins n upon installation of plugin execute the on plugin install py script using the same python executable used to run microdrop n the on plugin install py can e g be used to install dependencies using pip n n reporter cfobel cc resolution ts component dmf control board plugin summary add support for on plugin install py file in plugins priority major keywords version time milestone owner type defect
1
735,647
25,408,100,155
IssuesEvent
2022-11-22 16:42:16
kubernetes-sigs/cluster-api-provider-aws
https://api.github.com/repos/kubernetes-sigs/cluster-api-provider-aws
closed
Invalid annotation key for sigs.k8s.io/cluster-api-provider-aws/v2-last-applied-tags
kind/bug needs-triage needs-priority
/kind bug **What steps did you take and what happened:** Upgraded to v2.0.0 with clusterctl Looked into cluster-api-provider-aws logs. There were issues like: `Invalid value: \"sigs.k8s.io/cluster-api-provider-aws/v2-last-applied-tags\" a qualified name must consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName', or 'my.name', or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]') with an optional DNS subdomain prefix and '/' (e.g. 'example.com/MyName')` **Anything else you would like to add:** The issue is probably caused by two slashes appearing in the new annotations instead of one. This may be appearing when using `additionalTags`. * controllers/awsmachine_tags.go - `TagsLastAppliedAnnotation = "sigs.k8s.io/cluster-api-provider-aws/v2-last-applied-tags"` * pkg/cloud/services/ec2/launchtemplate.go - `TagsLastAppliedAnnotation = "sigs.k8s.io/cluster-api-provider-aws/v2-last-applied-tags"` * controllers/awsmachine_security_groups.go - `SecurityGroupsLastAppliedAnnotation = "sigs.k8s.io/cluster-api-provider-aws/v2-last-applied-security-groups"` **Environment:** - Cluster-api-provider-aws version: 2.00 - Kubernetes version: (use `kubectl version`): 1.23
1.0
Invalid annotation key for sigs.k8s.io/cluster-api-provider-aws/v2-last-applied-tags - /kind bug **What steps did you take and what happened:** Upgraded to v2.0.0 with clusterctl Looked into cluster-api-provider-aws logs. There were issues like: `Invalid value: \"sigs.k8s.io/cluster-api-provider-aws/v2-last-applied-tags\" a qualified name must consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName', or 'my.name', or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]') with an optional DNS subdomain prefix and '/' (e.g. 'example.com/MyName')` **Anything else you would like to add:** The issue is probably caused by two slashes appearing in the new annotations instead of one. This may be appearing when using `additionalTags`. * controllers/awsmachine_tags.go - `TagsLastAppliedAnnotation = "sigs.k8s.io/cluster-api-provider-aws/v2-last-applied-tags"` * pkg/cloud/services/ec2/launchtemplate.go - `TagsLastAppliedAnnotation = "sigs.k8s.io/cluster-api-provider-aws/v2-last-applied-tags"` * controllers/awsmachine_security_groups.go - `SecurityGroupsLastAppliedAnnotation = "sigs.k8s.io/cluster-api-provider-aws/v2-last-applied-security-groups"` **Environment:** - Cluster-api-provider-aws version: 2.00 - Kubernetes version: (use `kubectl version`): 1.23
non_defect
invalid annotation key for sigs io cluster api provider aws last applied tags kind bug what steps did you take and what happened upgraded to with clusterctl looked into cluster api provider aws logs there were issues like invalid value sigs io cluster api provider aws last applied tags a qualified name must consist of alphanumeric characters or and must start and end with an alphanumeric character e g myname or my name or abc regex used for validation is with an optional dns subdomain prefix and e g example com myname anything else you would like to add the issue is probably caused by two slashes appearing in the new annotations instead of one this may be appearing when using additionaltags controllers awsmachine tags go tagslastappliedannotation sigs io cluster api provider aws last applied tags pkg cloud services launchtemplate go tagslastappliedannotation sigs io cluster api provider aws last applied tags controllers awsmachine security groups go securitygroupslastappliedannotation sigs io cluster api provider aws last applied security groups environment cluster api provider aws version kubernetes version use kubectl version
0
100,504
4,097,613,295
IssuesEvent
2016-06-03 02:50:54
infiniteautomation/ma-core-public
https://api.github.com/repos/infiniteautomation/ma-core-public
opened
Ordered Task Queue Sizes
High Priority Item New Feature
Tasks are queued based on ID. If this queue gets full then tasks are rejected. In the legacy Timer system nothing was ever rejected for this reason as it was not possible. We must decide on queue sizes for all task types or some way to adjust the queue sizes during runtime to avoid rejections (unless rejections are desired as is the case in something like a backup). We could dynamically expand queue sizes if tasks are rejected, although this would require some tasks to get rejected. Since we know the type of rejection we could keep increasing the size of the queue so long as the pool isn't rejecting tasks. Note that point event notify work items are at high risk here since a data point can have many listeners ...
1.0
Ordered Task Queue Sizes - Tasks are queued based on ID. If this queue gets full then tasks are rejected. In the legacy Timer system nothing was ever rejected for this reason as it was not possible. We must decide on queue sizes for all task types or some way to adjust the queue sizes during runtime to avoid rejections (unless rejections are desired as is the case in something like a backup). We could dynamically expand queue sizes if tasks are rejected, although this would require some tasks to get rejected. Since we know the type of rejection we could keep increasing the size of the queue so long as the pool isn't rejecting tasks. Note that point event notify work items are at high risk here since a data point can have many listeners ...
non_defect
ordered task queue sizes tasks are queued based on id if this queue gets full then tasks are rejected in the legacy timer system nothing was ever rejected for this reason as it was not possible we must decide on queue sizes for all task types or some way to adjust the queue sizes during runtime to avoid rejections unless rejections are desired as is the case in something like a backup we could dynamically expand queue sizes if tasks are rejected although this would require some tasks to get rejected since we know the type of rejection we could keep increasing the size of the queue so long as the pool isn t rejecting tasks note that point event notify work items are at high risk here since a data point can have many listeners
0
97,419
3,992,682,797
IssuesEvent
2016-05-10 03:29:56
10up/ElasticPress
https://api.github.com/repos/10up/ElasticPress
opened
Admin GUI Usability Issues
bug high priority
This is an ongoing list of admin GUI usability issues: 1. If no index has occurred, the following message is shown: `ElasticPress is not enabled and cannot override WP queries. You can activate it on the form to the left.`. I spent hours trying to understand that message just to find out I needed to index. This is not an acceptable experience.
1.0
Admin GUI Usability Issues - This is an ongoing list of admin GUI usability issues: 1. If no index has occurred, the following message is shown: `ElasticPress is not enabled and cannot override WP queries. You can activate it on the form to the left.`. I spent hours trying to understand that message just to find out I needed to index. This is not an acceptable experience.
non_defect
admin gui usability issues this is an ongoing list of admin gui usability issues if no index has occurred the following message is shown elasticpress is not enabled and cannot override wp queries you can activate it on the form to the left i spent hours trying to understand that message just to find out i needed to index this is not an acceptable experience
0
372,393
11,013,648,867
IssuesEvent
2019-12-04 20:56:20
gravityview/GravityView
https://api.github.com/repos/gravityview/GravityView
opened
Switch lightbox script to one that supports secure links
Core: Frontend Difficulty: Medium Priority: High
Our lightbox functionality doesn't work properly out of the box because we're using an ancient script that is bundled with WordPress. It's time to move on. - [ ] Pick the library to use - [ ] Provide advance warning to developers that the switch will be made - [ ] Rename the `gravity_view_lightbox_script` and `gravity_view_lightbox_style` filters for goodness' sake - [ ] Update documentation https://docs.gravityview.co/search?query=thickbox This is broken. Let's un-break it. @gravityview/core Please add thoughts, comments, favorite lightbox scripts, etc. below.
1.0
Switch lightbox script to one that supports secure links - Our lightbox functionality doesn't work properly out of the box because we're using an ancient script that is bundled with WordPress. It's time to move on. - [ ] Pick the library to use - [ ] Provide advance warning to developers that the switch will be made - [ ] Rename the `gravity_view_lightbox_script` and `gravity_view_lightbox_style` filters for goodness' sake - [ ] Update documentation https://docs.gravityview.co/search?query=thickbox This is broken. Let's un-break it. @gravityview/core Please add thoughts, comments, favorite lightbox scripts, etc. below.
non_defect
switch lightbox script to one that supports secure links our lightbox functionality doesn t work properly out of the box because we re using an ancient script that is bundled with wordpress it s time to move on pick the library to use provide advance warning to developers that the switch will be made rename the gravity view lightbox script and gravity view lightbox style filters for goodness sake update documentation this is broken let s un break it gravityview core please add thoughts comments favorite lightbox scripts etc below
0
103,575
8,922,161,711
IssuesEvent
2019-01-21 12:10:38
wellcometrust/wellcomecollection.org
https://api.github.com/repos/wellcometrust/wellcomecollection.org
closed
A/B testing works single column page
a/b test research
Summary card of test: ![](https://user-images.githubusercontent.com/26549021/51183438-14533000-18c9-11e9-8ce5-923940cf3f52.png) requirements: - [x] sample size calc - [x] 'pagestate' event needs correcting - [x] traffic splitting in code - [x] dashboard to monitor results
1.0
A/B testing works single column page - Summary card of test: ![](https://user-images.githubusercontent.com/26549021/51183438-14533000-18c9-11e9-8ce5-923940cf3f52.png) requirements: - [x] sample size calc - [x] 'pagestate' event needs correcting - [x] traffic splitting in code - [x] dashboard to monitor results
non_defect
a b testing works single column page summary card of test requirements sample size calc pagestate event needs correcting traffic splitting in code dashboard to monitor results
0
416,657
28,094,188,859
IssuesEvent
2023-03-30 14:47:39
observablehq/plot
https://api.github.com/repos/observablehq/plot
closed
The grid scale option needs documentation & tests
documentation
since #1197 we can pass a color string, a number, an interval… as the scale's *grid* option. see ts documentation in #1371
1.0
The grid scale option needs documentation & tests - since #1197 we can pass a color string, a number, an interval… as the scale's *grid* option. see ts documentation in #1371
non_defect
the grid scale option needs documentation tests since we can pass a color string a number an interval… as the scale s grid option see ts documentation in
0
308,606
23,256,715,153
IssuesEvent
2022-08-04 09:54:11
mercedes-benz/sechub
https://api.github.com/repos/mercedes-benz/sechub
opened
getting-started guide needs update regarding golang
documentation client
## Problem: Following the Quickstart guide, the compilation of the SecHub client will fail because the go version shipped with Ubuntu is too old. https://mercedes-benz.github.io/sechub/latest/sechub-quickstart-guide.html ## Todo: Describe how to download the latest go version and do not use the distribution's golang package.
1.0
getting-started guide needs update regarding golang - ## Problem: Following the Quickstart guide, the compilation of the SecHub client will fail because the go version shipped with Ubuntu is too old. https://mercedes-benz.github.io/sechub/latest/sechub-quickstart-guide.html ## Todo: Describe how to download the latest go version and do not use the distribution's golang package.
non_defect
getting started guide needs update regarding golang problem following the quickstart guide the compilation of the sechub client will fail because the go version shipped with ubuntu is too old todo describe how to download the latest go version and do not use the distribution s golang package
0
72,075
23,922,103,508
IssuesEvent
2022-09-09 18:00:09
openzfs/zfs
https://api.github.com/repos/openzfs/zfs
opened
syncfs(3) hangs until the end of the next TXG
Type: Defect
<!-- Please fill out the following template, which will help other contributors address your issue. --> <!-- Thank you for reporting an issue. *IMPORTANT* - Please check our issue tracker before opening a new issue. Additional valuable information can be found in the OpenZFS documentation and mailing list archives. Please fill in as much of the template as possible. --> ### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name | NixOS Distribution Version | nixos-unstable 2022-09-22 Kernel Version | 5.18.19 Architecture | x86_64 OpenZFS Version | zfs-2.1.5-1 <!-- Command to find OpenZFS version: zfs version Commands to find kernel version: uname -r # Linux freebsd-version -r # FreeBSD --> ### Describe the problem you're observing NixOS calls `sync -f /nix/store` during system rebuilds. This ends up calling syncfs. Assuming /nix/store is not set up with sync=disabled, syncfs then blocks until the end of the next TXG instead of triggering an early commit. ### Describe how to reproduce the problem Run `syncfs -f /any/zfs/filesystem`, on a filesystem with sync=standard. For bonus points, run `echo 300 > /sys/module/zfs/parameters/zfs_txg_timeout` first. This is my standard configuration; it dramatically reduces write activity on desktop systems (which tend to repeatedly rewrite the same files), or for servers such as Minecraft. The command will block for up to 5 minutes, instead of returning ~immediately. ### Include any warning/errors/backtraces from the system logs <!-- *IMPORTANT* - Please mark logs and text output from terminal commands or else Github will not display them correctly. An example is provided below. Example: ``` this is an example how log text should be marked (wrap it with ```) ``` --> None; this is working as intended, near as I can tell. However, the intent of `sync -f` is to force an immediate commit; it shouldn't be waiting for the full timeout.
1.0
syncfs(3) hangs until the end of the next TXG - <!-- Please fill out the following template, which will help other contributors address your issue. --> <!-- Thank you for reporting an issue. *IMPORTANT* - Please check our issue tracker before opening a new issue. Additional valuable information can be found in the OpenZFS documentation and mailing list archives. Please fill in as much of the template as possible. --> ### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name | NixOS Distribution Version | nixos-unstable 2022-09-22 Kernel Version | 5.18.19 Architecture | x86_64 OpenZFS Version | zfs-2.1.5-1 <!-- Command to find OpenZFS version: zfs version Commands to find kernel version: uname -r # Linux freebsd-version -r # FreeBSD --> ### Describe the problem you're observing NixOS calls `sync -f /nix/store` during system rebuilds. This ends up calling syncfs. Assuming /nix/store is not set up with sync=disabled, syncfs then blocks until the end of the next TXG instead of triggering an early commit. ### Describe how to reproduce the problem Run `syncfs -f /any/zfs/filesystem`, on a filesystem with sync=standard. For bonus points, run `echo 300 > /sys/module/zfs/parameters/zfs_txg_timeout` first. This is my standard configuration; it dramatically reduces write activity on desktop systems (which tend to repeatedly rewrite the same files), or for servers such as Minecraft. The command will block for up to 5 minutes, instead of returning ~immediately. ### Include any warning/errors/backtraces from the system logs <!-- *IMPORTANT* - Please mark logs and text output from terminal commands or else Github will not display them correctly. An example is provided below. Example: ``` this is an example how log text should be marked (wrap it with ```) ``` --> None; this is working as intended, near as I can tell. However, the intent of `sync -f` is to force an immediate commit; it shouldn't be waiting for the full timeout.
defect
syncfs hangs until the end of the next txg thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information type version name distribution name nixos distribution version nixos unstable kernel version architecture openzfs version zfs command to find openzfs version zfs version commands to find kernel version uname r linux freebsd version r freebsd describe the problem you re observing nixos calls sync f nix store during system rebuilds this ends up calling syncfs assuming nix store is not set up with sync disabled syncfs then blocks until the end of the next txg instead of triggering an early commit describe how to reproduce the problem run syncfs f any zfs filesystem on a filesystem with sync standard for bonus points run echo sys module zfs parameters zfs txg timeout first this is my standard configuration it dramatically reduces write activity on desktop systems which tend to repeatedly rewrite the same files or for servers such as minecraft the command will block for up to minutes instead of returning immediately include any warning errors backtraces from the system logs important please mark logs and text output from terminal commands or else github will not display them correctly an example is provided below example this is an example how log text should be marked wrap it with none this is working as intended near as i can tell however the intent of sync f is to force an immediate commit it shouldn t be waiting for the full timeout
1
140,145
5,397,799,140
IssuesEvent
2017-02-27 15:31:14
zactayand26/Smak
https://api.github.com/repos/zactayand26/Smak
opened
Format Date Value in Authorization for iPhone Usage
enhancement ios low priority node-js
Currently, the value only displays a numerical day. Instead, this value should be formatted to be used optimally with the iPhone locally
1.0
Format Date Value in Authorization for iPhone Usage - Currently, the value only displays a numerical day. Instead, this value should be formatted to be used optimally with the iPhone locally
non_defect
format date value in authorization for iphone usage currently the value only displays a numerical day instead this value should be formatted to be used optimally with the iphone locally
0
8,135
2,611,453,883
IssuesEvent
2015-02-27 05:01:08
chrsmith/hedgewars
https://api.github.com/repos/chrsmith/hedgewars
closed
0.9.14.1, wolfmarc reported: frontend decoupling from game
auto-migrated Component-Engine Component-Frontend OpSys-All Priority-Low Type-Defect
``` This happened to wolfmarc about twice so far, not sure how to reproduce it. During an online game the frontend suddenly reported an "in lobby" message (or similar) and displayed the _room_ again. The Game however continued and he could play it till its end. After that he noticed that only his messages where displayed in the Frontend, not the ones others wrote. (and the screenshot below seems to display him as the only person in there) == Screenshot == see http://www.dropmocks.com/iNis1 == System == MacBook Pro 5,4 OS: os 10.6.5 Cpu: Intel Core 2, 2,53 Ghz, 2 Cache, L2: 3MB Ram: 4 gb ``` Original issue reported on code.google.com by `sheepyluva` on 20 Dec 2010 at 8:14
1.0
0.9.14.1, wolfmarc reported: frontend decoupling from game - ``` This happened to wolfmarc about twice so far, not sure how to reproduce it. During an online game the frontend suddenly reported an "in lobby" message (or similar) and displayed the _room_ again. The Game however continued and he could play it till its end. After that he noticed that only his messages where displayed in the Frontend, not the ones others wrote. (and the screenshot below seems to display him as the only person in there) == Screenshot == see http://www.dropmocks.com/iNis1 == System == MacBook Pro 5,4 OS: os 10.6.5 Cpu: Intel Core 2, 2,53 Ghz, 2 Cache, L2: 3MB Ram: 4 gb ``` Original issue reported on code.google.com by `sheepyluva` on 20 Dec 2010 at 8:14
defect
wolfmarc reported frontend decoupling from game this happened to wolfmarc about twice so far not sure how to reproduce it during an online game the frontend suddenly reported an in lobby message or similar and displayed the room again the game however continued and he could play it till its end after that he noticed that only his messages where displayed in the frontend not the ones others wrote and the screenshot below seems to display him as the only person in there screenshot see system macbook pro os os cpu intel core ghz cache ram gb original issue reported on code google com by sheepyluva on dec at
1
58,720
16,724,812,738
IssuesEvent
2021-06-10 11:44:58
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
closed
Low-priority section doesn't expand anymore on clicking
A-Room-List P1 S-Major T-Defect X-Regression X-Release-Blocker
### Description Low-priority section doesn't expand anymore on clicking. It can be opened when going from the last room in "Rooms" via alt-down. It can be closed again but doesn't reopen. Neither an Element or system restart nor clearing the cache fixed this. ### Steps to reproduce - Have some room in low-prio - close low-prio section - Try to open it again. There's nothing in the browser console logs. ### Version information - **Platform**: desktop For the desktop app: - **OS**: Arch - **Version**: 1.7.29-1 (https://github.com/archlinux/svntogit-community/blob/packages/element.io/trunk/PKGBUILD)
1.0
Low-priority section doesn't expand anymore on clicking - ### Description Low-priority section doesn't expand anymore on clicking. It can be opened when going from the last room in "Rooms" via alt-down. It can be closed again but doesn't reopen. Neither an Element or system restart nor clearing the cache fixed this. ### Steps to reproduce - Have some room in low-prio - close low-prio section - Try to open it again. There's nothing in the browser console logs. ### Version information - **Platform**: desktop For the desktop app: - **OS**: Arch - **Version**: 1.7.29-1 (https://github.com/archlinux/svntogit-community/blob/packages/element.io/trunk/PKGBUILD)
defect
low priority section doesn t expand anymore on clicking description low priority section doesn t expand anymore on clicking it can be opened when going from the last room in rooms via alt down it can be closed again but doesn t reopen neither an element or system restart nor clearing the cache fixed this steps to reproduce have some room in low prio close low prio section try to open it again there s nothing in the browser console logs version information platform desktop for the desktop app os arch version
1
52,438
13,224,728,372
IssuesEvent
2020-08-17 19:43:34
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
opened
[genie-icetray] lib detection (Trac #2210)
Incomplete Migration Migrated from Trac combo simulation defect
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2210">https://code.icecube.wisc.edu/projects/icecube/ticket/2210</a>, reported by david.schultz</summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:15:23", "_ts": "1550067323910946", "description": "With py2-v3.1.0, we have a new Genie install in CVMFS. This means new detection challenges:\n\n* LHAPDF\n* PYTHIA\n* libGenVector from ROOT\n\nI've tested manually adding these libraries to the linker command outside of cmake, and it can then link genie-icetray-mkspl correctly. Someone needs to go fix the cmakelists.", "reporter": "david.schultz", "cc": "claudio.kopper, olivas", "resolution": "fixed", "time": "2018-11-27T21:22:06", "component": "combo simulation", "summary": "[genie-icetray] lib detection", "priority": "critical", "keywords": "", "milestone": "", "owner": "", "type": "defect" } ``` </p> </details>
1.0
[genie-icetray] lib detection (Trac #2210) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2210">https://code.icecube.wisc.edu/projects/icecube/ticket/2210</a>, reported by david.schultz</summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:15:23", "_ts": "1550067323910946", "description": "With py2-v3.1.0, we have a new Genie install in CVMFS. This means new detection challenges:\n\n* LHAPDF\n* PYTHIA\n* libGenVector from ROOT\n\nI've tested manually adding these libraries to the linker command outside of cmake, and it can then link genie-icetray-mkspl correctly. Someone needs to go fix the cmakelists.", "reporter": "david.schultz", "cc": "claudio.kopper, olivas", "resolution": "fixed", "time": "2018-11-27T21:22:06", "component": "combo simulation", "summary": "[genie-icetray] lib detection", "priority": "critical", "keywords": "", "milestone": "", "owner": "", "type": "defect" } ``` </p> </details>
defect
lib detection trac migrated from json status closed changetime ts description with we have a new genie install in cvmfs this means new detection challenges n n lhapdf n pythia n libgenvector from root n ni ve tested manually adding these libraries to the linker command outside of cmake and it can then link genie icetray mkspl correctly someone needs to go fix the cmakelists reporter david schultz cc claudio kopper olivas resolution fixed time component combo simulation summary lib detection priority critical keywords milestone owner type defect
1
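The genie-icetray ticket above names the missing link dependencies only in prose. As a rough, hedged sketch (library file names such as libPythia6 and the $SROOT toolset prefix are assumptions drawn from the ticket text, not verified against the combo build), one could first confirm where those libraries live and what the tool still fails to resolve before touching the CMakeLists:

```sh
# Assumptions: $SROOT points at the py2-v3.1.0 CVMFS toolset prefix, and the
# libraries named in the ticket are installed there as libLHAPDF, libPythia6
# and libGenVector (the ROOT GenVector component).
for lib in libLHAPDF libPythia6 libGenVector; do
    find "$SROOT/lib" -maxdepth 1 -name "${lib}*" 2>/dev/null
done

# If a partially linked binary exists, list the shared objects it cannot resolve.
ldd build/bin/genie-icetray-mkspl | grep "not found"
```

Whatever turns up here is what the CMake detection would then need to add to the link line for genie-icetray-mkspl.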
72,330
24,054,626,890
IssuesEvent
2022-09-16 15:38:55
matrix-org/synapse
https://api.github.com/repos/matrix-org/synapse
closed
Complement fails to run - `synapse_rust.abi3.so: invalid ELF header`
S-Major T-Defect Z-Dev-Wishlist O-Uncommon A-Testing
- macOS 12.5.1 - Docker desktop 4.6.1 (76265) - Engine: 20.10.13 - Compose: 1.29.2 - `cargo --version` -> `cargo 1.63.0 (fd9c4297c 2022-07-01)` - `rustc --version` -> `rustc 1.63.0 (4b91a6ea7 2022-08-08)` - Synapse: latest `develop` (https://github.com/matrix-org/synapse/commit/12dacecabd27680dc77c17724953ecda0801b5ea) - Complement: latest `main` (https://github.com/matrix-org/complement/commit/d404c9f910d376ccad5eda111c165b56ef65ed17) ``` $ COMPLEMENT_ALWAYS_PRINT_SERVER_LOGS=1 COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh -run TestImportHistoricalMessages ... Traceback (most recent call last): File "/usr/local/lib/python3.9/runpy.py", line 188, in _run_module_as_main mod_name, mod_spec, code = _get_module_details(mod_name, _Error) File "/usr/local/lib/python3.9/runpy.py", line 111, in _get_module_details __import__(pkg_name) File "/usr/local/lib/python3.9/site-packages/synapse/__init__.py", line 23, in <module> from synapse.util.rust import check_rust_lib_up_to_date File "/usr/local/lib/python3.9/site-packages/synapse/util/rust.py", line 20, in <module> from synapse.synapse_rust import get_rust_file_digest ImportError: /usr/local/lib/python3.9/site-packages/synapse/synapse_rust.abi3.so: invalid ELF header ``` <details> <summary>Full terminal output</summary> ``` COMPLEMENT_ALWAYS_PRINT_SERVER_LOGS=1 COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh -run TestImportHistoricalMessages [+] Building 2.1s (28/28) FINISHED => [internal] load build definition from Dockerfile 0.1s => => transferring dockerfile: 6.12kB 0.0s => [internal] load .dockerignore 0.1s => => transferring context: 35B 0.0s => resolve image config for docker.io/docker/dockerfile:1 0.8s => CACHED docker-image://docker.io/docker/dockerfile:1@sha256:9ba7531bd80fb0a858632727cf7a112fbfd19b17e94c4e84ced81e24ef1a0dbc 0.0s => [internal] load build definition from Dockerfile 0.0s => [internal] load .dockerignore 0.0s => [internal] load metadata for docker.io/library/python:3.9-slim 0.4s => [internal] load build context 0.3s => => transferring context: 8.79MB 0.3s => [requirements 1/6] FROM docker.io/library/python:3.9-slim@sha256:031f73dcb29e616220ea3acf6cce822226f17971647cd682909d7c6a87feffd2 0.0s => CACHED [stage-2 2/5] RUN --mount=type=cache,target=/var/cache/apt,sharing=locked --mount=type=cache,target=/var/lib/apt,sharing=locked apt-get update -qq && apt-get 0.0s => CACHED [builder 2/10] RUN --mount=type=cache,target=/var/cache/apt,sharing=locked --mount=type=cache,target=/var/lib/apt,sharing=locked apt-get update -qq && apt-ge 0.0s => CACHED [builder 3/10] RUN mkdir /rust /cargo 0.0s => CACHED [builder 4/10] RUN curl -sSf https://sh.rustup.rs | sh -s -- -y --no-modify-path --default-toolchain stable 0.0s => CACHED [requirements 2/6] RUN --mount=type=cache,target=/var/cache/apt,sharing=locked --mount=type=cache,target=/var/lib/apt,sharing=locked apt-get update -qq && 0.0s => CACHED [requirements 3/6] RUN --mount=type=cache,target=/root/.cache/pip pip install --user "poetry==1.2.0" 0.0s => CACHED [requirements 4/6] WORKDIR /synapse 0.0s => CACHED [requirements 5/6] COPY pyproject.toml poetry.lock /synapse/ 0.0s => CACHED [requirements 6/6] RUN if [ -z "$TEST_ONLY_IGNORE_POETRY_LOCKFILE" ]; then /root/.local/bin/poetry export --extras all -o /synapse/requirements.txt ${TEST_ONLY_S 0.0s => CACHED [builder 5/10] COPY --from=requirements /synapse/requirements.txt /synapse/ 0.0s => CACHED [builder 6/10] RUN --mount=type=cache,target=/root/.cache/pip pip install --prefix="/install" --no-deps --no-warn-script-location 
-r /synapse/requirements.txt 0.0s => CACHED [builder 7/10] COPY synapse /synapse/synapse/ 0.0s => CACHED [builder 8/10] COPY rust /synapse/rust/ 0.0s => CACHED [builder 9/10] COPY pyproject.toml README.rst build_rust.py /synapse/ 0.0s => CACHED [builder 10/10] RUN if [ -z "$TEST_ONLY_IGNORE_POETRY_LOCKFILE" ]; then pip install --prefix="/install" --no-deps --no-warn-script-location /synapse[all]; else 0.0s => CACHED [stage-2 3/5] COPY --from=builder /install /usr/local 0.0s => CACHED [stage-2 4/5] COPY ./docker/start.py /start.py 0.0s => CACHED [stage-2 5/5] COPY ./docker/conf /conf 0.0s => exporting to image 0.0s => => exporting layers 0.0s => => writing image sha256:6a0da90e6f95458e86c9751fc1003ba023cab25e187162d46f0c80214354b52e 0.0s => => naming to docker.io/matrixdotorg/synapse 0.0s Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them [+] Building 1.6s (28/28) FINISHED => [internal] load build definition from Dockerfile-workers 0.0s => => transferring dockerfile: 2.59kB 0.0s => [internal] load .dockerignore 0.0s => => transferring context: 35B 0.0s => resolve image config for docker.io/docker/dockerfile:1 0.2s => CACHED docker-image://docker.io/docker/dockerfile:1@sha256:9ba7531bd80fb0a858632727cf7a112fbfd19b17e94c4e84ced81e24ef1a0dbc 0.0s => [internal] load build definition from Dockerfile-workers 0.0s => [internal] load .dockerignore 0.0s => [internal] load metadata for docker.io/matrixdotorg/synapse:latest 0.0s => [internal] load metadata for docker.io/library/redis:6-bullseye 0.7s => [internal] load metadata for docker.io/library/debian:bullseye-slim 0.7s => [internal] load build context 0.0s => => transferring context: 30.13kB 0.0s => [deps_base 1/2] FROM docker.io/library/debian:bullseye-slim@sha256:5cf1d98cd0805951484f33b34c1ab25aac7007bb41c8b9901d97e4be3cf3ab04 0.0s => [redis_base 1/1] FROM docker.io/library/redis:6-bullseye@sha256:664d7932b44306ea2943ffde0b2cce4596198b88b81d22c73607ce572d55c63a 0.0s => [stage-2 1/14] FROM docker.io/matrixdotorg/synapse:latest 0.0s => CACHED [stage-2 2/14] RUN --mount=type=cache,target=/root/.cache/pip pip install supervisor~=4.2 0.0s => CACHED [stage-2 3/14] RUN mkdir -p /etc/supervisor/conf.d 0.0s => CACHED [stage-2 4/14] COPY --from=redis_base /usr/local/bin/redis-server /usr/local/bin 0.0s => CACHED [deps_base 2/2] RUN --mount=type=cache,target=/var/cache/apt,sharing=locked --mount=type=cache,target=/var/lib/apt,sharing=locked apt-get update 0.0s => CACHED [stage-2 5/14] COPY --from=deps_base /usr/sbin/nginx /usr/sbin 0.0s => CACHED [stage-2 6/14] COPY --from=deps_base /usr/share/nginx /usr/share/nginx 0.0s => CACHED [stage-2 7/14] COPY --from=deps_base /usr/lib/nginx /usr/lib/nginx 0.0s => CACHED [stage-2 8/14] COPY --from=deps_base /etc/nginx /etc/nginx 0.0s => CACHED [stage-2 9/14] RUN rm /etc/nginx/sites-enabled/default 0.0s => CACHED [stage-2 10/14] RUN mkdir /var/log/nginx /var/lib/nginx 0.0s => CACHED [stage-2 11/14] RUN chown www-data /var/log/nginx /var/lib/nginx 0.0s => CACHED [stage-2 12/14] COPY ./docker/conf-workers/* /conf/ 0.0s => CACHED [stage-2 13/14] COPY ./docker/prefix-log /usr/local/bin/ 0.0s => CACHED [stage-2 14/14] COPY ./docker/configure_workers_and_start.py /configure_workers_and_start.py 0.0s => exporting to image 0.1s => => exporting layers 0.0s => => writing image sha256:8cb8fa71f2b695e8b87e27887badf7df2505b40b0d0045a31a27d269aae64e9a 0.0s => => naming to docker.io/matrixdotorg/synapse-workers 0.0s Use 'docker scan' to run Snyk tests against images to find 
vulnerabilities and learn how to fix them [+] Building 1.3s (25/25) FINISHED => [internal] load build definition from Dockerfile 0.0s => => transferring dockerfile: 37B 0.0s => [internal] load .dockerignore 0.0s => => transferring context: 2B 0.0s => resolve image config for docker.io/docker/dockerfile:1 0.1s => CACHED docker-image://docker.io/docker/dockerfile:1@sha256:9ba7531bd80fb0a858632727cf7a112fbfd19b17e94c4e84ced81e24ef1a0dbc 0.0s => [internal] load .dockerignore 0.0s => [internal] load build definition from Dockerfile 0.0s => [internal] load metadata for docker.io/matrixdotorg/synapse-workers:latest 0.0s => [internal] load metadata for docker.io/library/postgres:13-bullseye 0.6s => [postgres_base 1/4] FROM docker.io/library/postgres:13-bullseye@sha256:e19ef49637c613183f56be5fa99ec33bcab03fd1a4279445a2f5ac224668bf03 0.0s => [internal] load build context 0.0s => => transferring context: 177B 0.0s => [stage-1 1/11] FROM docker.io/matrixdotorg/synapse-workers:latest 0.0s => CACHED [stage-1 2/11] RUN adduser --system --uid 999 postgres --home /var/lib/postgresql 0.0s => CACHED [postgres_base 2/4] RUN gosu postgres initdb --locale=C --encoding=UTF-8 --auth-host password 0.0s => CACHED [postgres_base 3/4] RUN echo "ALTER USER postgres PASSWORD 'somesecret'" | gosu postgres postgres --single 0.0s => CACHED [postgres_base 4/4] RUN echo "CREATE DATABASE synapse" | gosu postgres postgres --single 0.0s => CACHED [stage-1 3/11] COPY --from=postgres_base /var/lib/postgresql /var/lib/postgresql 0.0s => CACHED [stage-1 4/11] COPY --from=postgres_base /usr/lib/postgresql /usr/lib/postgresql 0.0s => CACHED [stage-1 5/11] COPY --from=postgres_base /usr/share/postgresql /usr/share/postgresql 0.0s => CACHED [stage-1 6/11] RUN mkdir /var/run/postgresql && chown postgres /var/run/postgresql 0.0s => CACHED [stage-1 7/11] RUN mv /conf/shared.yaml.j2 /conf/shared-orig.yaml.j2 0.0s => CACHED [stage-1 8/11] COPY conf/workers-shared-extra.yaml.j2 /conf/shared.yaml.j2 0.0s => CACHED [stage-1 9/11] WORKDIR /data 0.0s => CACHED [stage-1 10/11] COPY conf/postgres.supervisord.conf /etc/supervisor/conf.d/postgres.conf 0.0s => CACHED [stage-1 11/11] COPY conf/start_for_complement.sh / 0.0s => exporting to image 0.1s => => exporting layers 0.0s => => writing image sha256:451ac8c1a9fe37b0697225300be78841be83467a519f11c8e45856dedf22526e 0.0s => => naming to docker.io/library/complement-synapse 0.0s Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them Images built; running complement 2022/09/13 17:45:02 config: &{BaseImageURI:complement-synapse BaseImageURIs:map[] DebugLoggingEnabled:false AlwaysPrintServerLogs:true BestEffort:false EnvVarsPropagatePrefix:PASS_ SpawnHSTimeout:30s KeepBlueprints:[] HostMounts:[] PackageNamespace:fed CACertificate:0xc000666000 CAPrivateKey:0xc00080f080} === RUN TestImportHistoricalMessages 2022/09/13 17:45:02 Sharing [SERVER_NAME=hs1 SYNAPSE_COMPLEMENT_DATABASE=sqlite SYNAPSE_COMPLEMENT_USE_WORKERS=] host environment variables with container 2022/09/13 17:45:33 fed.hs_with_application_service.hs1 : failed to deployBaseImage: fed.hs_with_application_service.hs1: failed to check server is up. 
timed out checking for homeserver to be up: inspect container 517a6c0340f058fa21b4a3a538d0b9d0f468943de9bd74b0b702d3764d49985f => health: unhealthy 2022/09/13 17:45:33 ============================================ 2022/09/13 17:45:33 fed.hs_with_application_service.hs1 : Server logs: Complement Synapse launcher Args: Env: SYNAPSE_COMPLEMENT_DATABASE=sqlite SYNAPSE_COMPLEMENT_USE_WORKERS= Generating RSA private key, 2048 bit long modulus (2 primes) .............+++++ .................+++++ e is 65537 (0x010001) Signature ok subject=CN = hs1 Getting CA Private Key DNS:hs1 Generating a random secret for SYNAPSE_REGISTRATION_SHARED_SECRET Generating a random secret for SYNAPSE_MACAROON_SECRET_KEY Generating synapse config file /data/homeserver.yaml Generating log config file /data/log.config Setting ownership on /data to 991:991 Traceback (most recent call last): File "/usr/local/lib/python3.9/runpy.py", line 188, in _run_module_as_main mod_name, mod_spec, code = _get_module_details(mod_name, _Error) File "/usr/local/lib/python3.9/runpy.py", line 111, in _get_module_details __import__(pkg_name) File "/usr/local/lib/python3.9/site-packages/synapse/__init__.py", line 23, in <module> from synapse.util.rust import check_rust_lib_up_to_date File "/usr/local/lib/python3.9/site-packages/synapse/util/rust.py", line 20, in <module> from synapse.synapse_rust import get_rust_file_digest ImportError: /usr/local/lib/python3.9/site-packages/synapse/synapse_rust.abi3.so: invalid ELF header Traceback (most recent call last): File "/start.py", line 276, in <module> main(sys.argv, os.environ) File "/start.py", line 216, in main return generate_config_from_template( File "/start.py", line 137, in generate_config_from_template subprocess.check_output(args) File "/usr/local/lib/python3.9/subprocess.py", line 424, in check_output return run(*popenargs, stdout=PIPE, timeout=timeout, check=True, File "/usr/local/lib/python3.9/subprocess.py", line 528, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['gosu', '991:991', '/usr/local/bin/python', '-m', 'synapse.app.homeserver', '--config-path', '/data/homeserver.yaml', '--keys-directory', '/data', '--generate-keys']' returned non-zero exit status 1. Generating base homeserver config Traceback (most recent call last): File "/configure_workers_and_start.py", line 619, in <module> main(sys.argv, os.environ) File "/configure_workers_and_start.py", line 594, in main generate_base_homeserver_config() File "/configure_workers_and_start.py", line 302, in generate_base_homeserver_config subprocess.check_output(["/usr/local/bin/python", "/start.py", "migrate_config"]) File "/usr/local/lib/python3.9/subprocess.py", line 424, in check_output return run(*popenargs, stdout=PIPE, timeout=timeout, check=True, File "/usr/local/lib/python3.9/subprocess.py", line 528, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['/usr/local/bin/python', '/start.py', 'migrate_config']' returned non-zero exit status 1. 2022/09/13 17:45:33 ============== fed.hs_with_application_service.hs1 : END LOGS ============== msc2716_test.go:64: Deploy: Failed to construct blueprint: ConstructBlueprintIfNotExist(hs_with_application_service): failed to ConstructBlueprint: errors whilst constructing blueprint hs_with_application_service: [fed.hs_with_application_service.hs1: failed to check server is up. 
timed out checking for homeserver to be up: inspect container 517a6c0340f058fa21b4a3a538d0b9d0f468943de9bd74b0b702d3764d49985f => health: unhealthy] --- FAIL: TestImportHistoricalMessages (30.81s) FAIL FAIL github.com/matrix-org/complement/tests 33.760s 2022/09/13 17:45:00 config: &{BaseImageURI:complement-synapse BaseImageURIs:map[] DebugLoggingEnabled:false AlwaysPrintServerLogs:true BestEffort:false EnvVarsPropagatePrefix:PASS_ SpawnHSTimeout:30s KeepBlueprints:[] HostMounts:[] PackageNamespace:csapi CACertificate:0xc000590580 CAPrivateKey:0xc000021980} testing: warning: no tests to run PASS ok github.com/matrix-org/complement/tests/csapi 0.932s [no tests to run] FAIL ``` </details>
1.0
Complement fails to run - `synapse_rust.abi3.so: invalid ELF header` - - macOS 12.5.1 - Docker desktop 4.6.1 (76265) - Engine: 20.10.13 - Compose: 1.29.2 - `cargo --version` -> `cargo 1.63.0 (fd9c4297c 2022-07-01)` - `rustc --version` -> `rustc 1.63.0 (4b91a6ea7 2022-08-08)` - Synapse: latest `develop` (https://github.com/matrix-org/synapse/commit/12dacecabd27680dc77c17724953ecda0801b5ea) - Complement: latest `main` (https://github.com/matrix-org/complement/commit/d404c9f910d376ccad5eda111c165b56ef65ed17) ``` $ COMPLEMENT_ALWAYS_PRINT_SERVER_LOGS=1 COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh -run TestImportHistoricalMessages ... Traceback (most recent call last): File "/usr/local/lib/python3.9/runpy.py", line 188, in _run_module_as_main mod_name, mod_spec, code = _get_module_details(mod_name, _Error) File "/usr/local/lib/python3.9/runpy.py", line 111, in _get_module_details __import__(pkg_name) File "/usr/local/lib/python3.9/site-packages/synapse/__init__.py", line 23, in <module> from synapse.util.rust import check_rust_lib_up_to_date File "/usr/local/lib/python3.9/site-packages/synapse/util/rust.py", line 20, in <module> from synapse.synapse_rust import get_rust_file_digest ImportError: /usr/local/lib/python3.9/site-packages/synapse/synapse_rust.abi3.so: invalid ELF header ``` <details> <summary>Full terminal output</summary> ``` COMPLEMENT_ALWAYS_PRINT_SERVER_LOGS=1 COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh -run TestImportHistoricalMessages [+] Building 2.1s (28/28) FINISHED => [internal] load build definition from Dockerfile 0.1s => => transferring dockerfile: 6.12kB 0.0s => [internal] load .dockerignore 0.1s => => transferring context: 35B 0.0s => resolve image config for docker.io/docker/dockerfile:1 0.8s => CACHED docker-image://docker.io/docker/dockerfile:1@sha256:9ba7531bd80fb0a858632727cf7a112fbfd19b17e94c4e84ced81e24ef1a0dbc 0.0s => [internal] load build definition from Dockerfile 0.0s => [internal] load .dockerignore 0.0s => [internal] load metadata for docker.io/library/python:3.9-slim 0.4s => [internal] load build context 0.3s => => transferring context: 8.79MB 0.3s => [requirements 1/6] FROM docker.io/library/python:3.9-slim@sha256:031f73dcb29e616220ea3acf6cce822226f17971647cd682909d7c6a87feffd2 0.0s => CACHED [stage-2 2/5] RUN --mount=type=cache,target=/var/cache/apt,sharing=locked --mount=type=cache,target=/var/lib/apt,sharing=locked apt-get update -qq && apt-get 0.0s => CACHED [builder 2/10] RUN --mount=type=cache,target=/var/cache/apt,sharing=locked --mount=type=cache,target=/var/lib/apt,sharing=locked apt-get update -qq && apt-ge 0.0s => CACHED [builder 3/10] RUN mkdir /rust /cargo 0.0s => CACHED [builder 4/10] RUN curl -sSf https://sh.rustup.rs | sh -s -- -y --no-modify-path --default-toolchain stable 0.0s => CACHED [requirements 2/6] RUN --mount=type=cache,target=/var/cache/apt,sharing=locked --mount=type=cache,target=/var/lib/apt,sharing=locked apt-get update -qq && 0.0s => CACHED [requirements 3/6] RUN --mount=type=cache,target=/root/.cache/pip pip install --user "poetry==1.2.0" 0.0s => CACHED [requirements 4/6] WORKDIR /synapse 0.0s => CACHED [requirements 5/6] COPY pyproject.toml poetry.lock /synapse/ 0.0s => CACHED [requirements 6/6] RUN if [ -z "$TEST_ONLY_IGNORE_POETRY_LOCKFILE" ]; then /root/.local/bin/poetry export --extras all -o /synapse/requirements.txt ${TEST_ONLY_S 0.0s => CACHED [builder 5/10] COPY --from=requirements /synapse/requirements.txt /synapse/ 0.0s => CACHED [builder 6/10] RUN 
--mount=type=cache,target=/root/.cache/pip pip install --prefix="/install" --no-deps --no-warn-script-location -r /synapse/requirements.txt 0.0s => CACHED [builder 7/10] COPY synapse /synapse/synapse/ 0.0s => CACHED [builder 8/10] COPY rust /synapse/rust/ 0.0s => CACHED [builder 9/10] COPY pyproject.toml README.rst build_rust.py /synapse/ 0.0s => CACHED [builder 10/10] RUN if [ -z "$TEST_ONLY_IGNORE_POETRY_LOCKFILE" ]; then pip install --prefix="/install" --no-deps --no-warn-script-location /synapse[all]; else 0.0s => CACHED [stage-2 3/5] COPY --from=builder /install /usr/local 0.0s => CACHED [stage-2 4/5] COPY ./docker/start.py /start.py 0.0s => CACHED [stage-2 5/5] COPY ./docker/conf /conf 0.0s => exporting to image 0.0s => => exporting layers 0.0s => => writing image sha256:6a0da90e6f95458e86c9751fc1003ba023cab25e187162d46f0c80214354b52e 0.0s => => naming to docker.io/matrixdotorg/synapse 0.0s Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them [+] Building 1.6s (28/28) FINISHED => [internal] load build definition from Dockerfile-workers 0.0s => => transferring dockerfile: 2.59kB 0.0s => [internal] load .dockerignore 0.0s => => transferring context: 35B 0.0s => resolve image config for docker.io/docker/dockerfile:1 0.2s => CACHED docker-image://docker.io/docker/dockerfile:1@sha256:9ba7531bd80fb0a858632727cf7a112fbfd19b17e94c4e84ced81e24ef1a0dbc 0.0s => [internal] load build definition from Dockerfile-workers 0.0s => [internal] load .dockerignore 0.0s => [internal] load metadata for docker.io/matrixdotorg/synapse:latest 0.0s => [internal] load metadata for docker.io/library/redis:6-bullseye 0.7s => [internal] load metadata for docker.io/library/debian:bullseye-slim 0.7s => [internal] load build context 0.0s => => transferring context: 30.13kB 0.0s => [deps_base 1/2] FROM docker.io/library/debian:bullseye-slim@sha256:5cf1d98cd0805951484f33b34c1ab25aac7007bb41c8b9901d97e4be3cf3ab04 0.0s => [redis_base 1/1] FROM docker.io/library/redis:6-bullseye@sha256:664d7932b44306ea2943ffde0b2cce4596198b88b81d22c73607ce572d55c63a 0.0s => [stage-2 1/14] FROM docker.io/matrixdotorg/synapse:latest 0.0s => CACHED [stage-2 2/14] RUN --mount=type=cache,target=/root/.cache/pip pip install supervisor~=4.2 0.0s => CACHED [stage-2 3/14] RUN mkdir -p /etc/supervisor/conf.d 0.0s => CACHED [stage-2 4/14] COPY --from=redis_base /usr/local/bin/redis-server /usr/local/bin 0.0s => CACHED [deps_base 2/2] RUN --mount=type=cache,target=/var/cache/apt,sharing=locked --mount=type=cache,target=/var/lib/apt,sharing=locked apt-get update 0.0s => CACHED [stage-2 5/14] COPY --from=deps_base /usr/sbin/nginx /usr/sbin 0.0s => CACHED [stage-2 6/14] COPY --from=deps_base /usr/share/nginx /usr/share/nginx 0.0s => CACHED [stage-2 7/14] COPY --from=deps_base /usr/lib/nginx /usr/lib/nginx 0.0s => CACHED [stage-2 8/14] COPY --from=deps_base /etc/nginx /etc/nginx 0.0s => CACHED [stage-2 9/14] RUN rm /etc/nginx/sites-enabled/default 0.0s => CACHED [stage-2 10/14] RUN mkdir /var/log/nginx /var/lib/nginx 0.0s => CACHED [stage-2 11/14] RUN chown www-data /var/log/nginx /var/lib/nginx 0.0s => CACHED [stage-2 12/14] COPY ./docker/conf-workers/* /conf/ 0.0s => CACHED [stage-2 13/14] COPY ./docker/prefix-log /usr/local/bin/ 0.0s => CACHED [stage-2 14/14] COPY ./docker/configure_workers_and_start.py /configure_workers_and_start.py 0.0s => exporting to image 0.1s => => exporting layers 0.0s => => writing image sha256:8cb8fa71f2b695e8b87e27887badf7df2505b40b0d0045a31a27d269aae64e9a 0.0s => => naming to 
docker.io/matrixdotorg/synapse-workers 0.0s Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them [+] Building 1.3s (25/25) FINISHED => [internal] load build definition from Dockerfile 0.0s => => transferring dockerfile: 37B 0.0s => [internal] load .dockerignore 0.0s => => transferring context: 2B 0.0s => resolve image config for docker.io/docker/dockerfile:1 0.1s => CACHED docker-image://docker.io/docker/dockerfile:1@sha256:9ba7531bd80fb0a858632727cf7a112fbfd19b17e94c4e84ced81e24ef1a0dbc 0.0s => [internal] load .dockerignore 0.0s => [internal] load build definition from Dockerfile 0.0s => [internal] load metadata for docker.io/matrixdotorg/synapse-workers:latest 0.0s => [internal] load metadata for docker.io/library/postgres:13-bullseye 0.6s => [postgres_base 1/4] FROM docker.io/library/postgres:13-bullseye@sha256:e19ef49637c613183f56be5fa99ec33bcab03fd1a4279445a2f5ac224668bf03 0.0s => [internal] load build context 0.0s => => transferring context: 177B 0.0s => [stage-1 1/11] FROM docker.io/matrixdotorg/synapse-workers:latest 0.0s => CACHED [stage-1 2/11] RUN adduser --system --uid 999 postgres --home /var/lib/postgresql 0.0s => CACHED [postgres_base 2/4] RUN gosu postgres initdb --locale=C --encoding=UTF-8 --auth-host password 0.0s => CACHED [postgres_base 3/4] RUN echo "ALTER USER postgres PASSWORD 'somesecret'" | gosu postgres postgres --single 0.0s => CACHED [postgres_base 4/4] RUN echo "CREATE DATABASE synapse" | gosu postgres postgres --single 0.0s => CACHED [stage-1 3/11] COPY --from=postgres_base /var/lib/postgresql /var/lib/postgresql 0.0s => CACHED [stage-1 4/11] COPY --from=postgres_base /usr/lib/postgresql /usr/lib/postgresql 0.0s => CACHED [stage-1 5/11] COPY --from=postgres_base /usr/share/postgresql /usr/share/postgresql 0.0s => CACHED [stage-1 6/11] RUN mkdir /var/run/postgresql && chown postgres /var/run/postgresql 0.0s => CACHED [stage-1 7/11] RUN mv /conf/shared.yaml.j2 /conf/shared-orig.yaml.j2 0.0s => CACHED [stage-1 8/11] COPY conf/workers-shared-extra.yaml.j2 /conf/shared.yaml.j2 0.0s => CACHED [stage-1 9/11] WORKDIR /data 0.0s => CACHED [stage-1 10/11] COPY conf/postgres.supervisord.conf /etc/supervisor/conf.d/postgres.conf 0.0s => CACHED [stage-1 11/11] COPY conf/start_for_complement.sh / 0.0s => exporting to image 0.1s => => exporting layers 0.0s => => writing image sha256:451ac8c1a9fe37b0697225300be78841be83467a519f11c8e45856dedf22526e 0.0s => => naming to docker.io/library/complement-synapse 0.0s Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them Images built; running complement 2022/09/13 17:45:02 config: &{BaseImageURI:complement-synapse BaseImageURIs:map[] DebugLoggingEnabled:false AlwaysPrintServerLogs:true BestEffort:false EnvVarsPropagatePrefix:PASS_ SpawnHSTimeout:30s KeepBlueprints:[] HostMounts:[] PackageNamespace:fed CACertificate:0xc000666000 CAPrivateKey:0xc00080f080} === RUN TestImportHistoricalMessages 2022/09/13 17:45:02 Sharing [SERVER_NAME=hs1 SYNAPSE_COMPLEMENT_DATABASE=sqlite SYNAPSE_COMPLEMENT_USE_WORKERS=] host environment variables with container 2022/09/13 17:45:33 fed.hs_with_application_service.hs1 : failed to deployBaseImage: fed.hs_with_application_service.hs1: failed to check server is up. 
timed out checking for homeserver to be up: inspect container 517a6c0340f058fa21b4a3a538d0b9d0f468943de9bd74b0b702d3764d49985f => health: unhealthy 2022/09/13 17:45:33 ============================================ 2022/09/13 17:45:33 fed.hs_with_application_service.hs1 : Server logs: Complement Synapse launcher Args: Env: SYNAPSE_COMPLEMENT_DATABASE=sqlite SYNAPSE_COMPLEMENT_USE_WORKERS= Generating RSA private key, 2048 bit long modulus (2 primes) .............+++++ .................+++++ e is 65537 (0x010001) Signature ok subject=CN = hs1 Getting CA Private Key DNS:hs1 Generating a random secret for SYNAPSE_REGISTRATION_SHARED_SECRET Generating a random secret for SYNAPSE_MACAROON_SECRET_KEY Generating synapse config file /data/homeserver.yaml Generating log config file /data/log.config Setting ownership on /data to 991:991 Traceback (most recent call last): File "/usr/local/lib/python3.9/runpy.py", line 188, in _run_module_as_main mod_name, mod_spec, code = _get_module_details(mod_name, _Error) File "/usr/local/lib/python3.9/runpy.py", line 111, in _get_module_details __import__(pkg_name) File "/usr/local/lib/python3.9/site-packages/synapse/__init__.py", line 23, in <module> from synapse.util.rust import check_rust_lib_up_to_date File "/usr/local/lib/python3.9/site-packages/synapse/util/rust.py", line 20, in <module> from synapse.synapse_rust import get_rust_file_digest ImportError: /usr/local/lib/python3.9/site-packages/synapse/synapse_rust.abi3.so: invalid ELF header Traceback (most recent call last): File "/start.py", line 276, in <module> main(sys.argv, os.environ) File "/start.py", line 216, in main return generate_config_from_template( File "/start.py", line 137, in generate_config_from_template subprocess.check_output(args) File "/usr/local/lib/python3.9/subprocess.py", line 424, in check_output return run(*popenargs, stdout=PIPE, timeout=timeout, check=True, File "/usr/local/lib/python3.9/subprocess.py", line 528, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['gosu', '991:991', '/usr/local/bin/python', '-m', 'synapse.app.homeserver', '--config-path', '/data/homeserver.yaml', '--keys-directory', '/data', '--generate-keys']' returned non-zero exit status 1. Generating base homeserver config Traceback (most recent call last): File "/configure_workers_and_start.py", line 619, in <module> main(sys.argv, os.environ) File "/configure_workers_and_start.py", line 594, in main generate_base_homeserver_config() File "/configure_workers_and_start.py", line 302, in generate_base_homeserver_config subprocess.check_output(["/usr/local/bin/python", "/start.py", "migrate_config"]) File "/usr/local/lib/python3.9/subprocess.py", line 424, in check_output return run(*popenargs, stdout=PIPE, timeout=timeout, check=True, File "/usr/local/lib/python3.9/subprocess.py", line 528, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['/usr/local/bin/python', '/start.py', 'migrate_config']' returned non-zero exit status 1. 2022/09/13 17:45:33 ============== fed.hs_with_application_service.hs1 : END LOGS ============== msc2716_test.go:64: Deploy: Failed to construct blueprint: ConstructBlueprintIfNotExist(hs_with_application_service): failed to ConstructBlueprint: errors whilst constructing blueprint hs_with_application_service: [fed.hs_with_application_service.hs1: failed to check server is up. 
timed out checking for homeserver to be up: inspect container 517a6c0340f058fa21b4a3a538d0b9d0f468943de9bd74b0b702d3764d49985f => health: unhealthy] --- FAIL: TestImportHistoricalMessages (30.81s) FAIL FAIL github.com/matrix-org/complement/tests 33.760s 2022/09/13 17:45:00 config: &{BaseImageURI:complement-synapse BaseImageURIs:map[] DebugLoggingEnabled:false AlwaysPrintServerLogs:true BestEffort:false EnvVarsPropagatePrefix:PASS_ SpawnHSTimeout:30s KeepBlueprints:[] HostMounts:[] PackageNamespace:csapi CACertificate:0xc000590580 CAPrivateKey:0xc000021980} testing: warning: no tests to run PASS ok github.com/matrix-org/complement/tests/csapi 0.932s [no tests to run] FAIL ``` </details>
defect
complement fails to run synapse rust so invalid elf header macos docker desktop engine compose cargo version cargo rustc version rustc synapse latest develop complement latest main complement always print server logs complement dir complement scripts dev complement sh run testimporthistoricalmessages traceback most recent call last file usr local lib runpy py line in run module as main mod name mod spec code get module details mod name error file usr local lib runpy py line in get module details import pkg name file usr local lib site packages synapse init py line in from synapse util rust import check rust lib up to date file usr local lib site packages synapse util rust py line in from synapse synapse rust import get rust file digest importerror usr local lib site packages synapse synapse rust so invalid elf header full terminal output complement always print server logs complement dir complement scripts dev complement sh run testimporthistoricalmessages building finished load build definition from dockerfile transferring dockerfile load dockerignore transferring context resolve image config for docker io docker dockerfile cached docker image docker io docker dockerfile load build definition from dockerfile load dockerignore load metadata for docker io library python slim load build context transferring context from docker io library python slim cached run mount type cache target var cache apt sharing locked mount type cache target var lib apt sharing locked apt get update qq apt get cached run mount type cache target var cache apt sharing locked mount type cache target var lib apt sharing locked apt get update qq apt ge cached run mkdir rust cargo cached run curl ssf sh s y no modify path default toolchain stable cached run mount type cache target var cache apt sharing locked mount type cache target var lib apt sharing locked apt get update qq cached run mount type cache target root cache pip pip install user poetry cached workdir synapse cached copy pyproject toml poetry lock synapse cached run if then root local bin poetry export extras all o synapse requirements txt test only s cached copy from requirements synapse requirements txt synapse cached run mount type cache target root cache pip pip install prefix install no deps no warn script location r synapse requirements txt cached copy synapse synapse synapse cached copy rust synapse rust cached copy pyproject toml readme rst build rust py synapse cached run if then pip install prefix install no deps no warn script location synapse else cached copy from builder install usr local cached copy docker start py start py cached copy docker conf conf exporting to image exporting layers writing image naming to docker io matrixdotorg synapse use docker scan to run snyk tests against images to find vulnerabilities and learn how to fix them building finished load build definition from dockerfile workers transferring dockerfile load dockerignore transferring context resolve image config for docker io docker dockerfile cached docker image docker io docker dockerfile load build definition from dockerfile workers load dockerignore load metadata for docker io matrixdotorg synapse latest load metadata for docker io library redis bullseye load metadata for docker io library debian bullseye slim load build context transferring context from docker io library debian bullseye slim from docker io library redis bullseye from docker io matrixdotorg synapse latest cached run mount type cache target root cache pip pip install supervisor cached run mkdir p etc 
supervisor conf d cached copy from redis base usr local bin redis server usr local bin cached run mount type cache target var cache apt sharing locked mount type cache target var lib apt sharing locked apt get update cached copy from deps base usr sbin nginx usr sbin cached copy from deps base usr share nginx usr share nginx cached copy from deps base usr lib nginx usr lib nginx cached copy from deps base etc nginx etc nginx cached run rm etc nginx sites enabled default cached run mkdir var log nginx var lib nginx cached run chown www data var log nginx var lib nginx cached copy docker conf workers conf cached copy docker prefix log usr local bin cached copy docker configure workers and start py configure workers and start py exporting to image exporting layers writing image naming to docker io matrixdotorg synapse workers use docker scan to run snyk tests against images to find vulnerabilities and learn how to fix them building finished load build definition from dockerfile transferring dockerfile load dockerignore transferring context resolve image config for docker io docker dockerfile cached docker image docker io docker dockerfile load dockerignore load build definition from dockerfile load metadata for docker io matrixdotorg synapse workers latest load metadata for docker io library postgres bullseye from docker io library postgres bullseye load build context transferring context from docker io matrixdotorg synapse workers latest cached run adduser system uid postgres home var lib postgresql cached run gosu postgres initdb locale c encoding utf auth host password cached run echo alter user postgres password somesecret gosu postgres postgres single cached run echo create database synapse gosu postgres postgres single cached copy from postgres base var lib postgresql var lib postgresql cached copy from postgres base usr lib postgresql usr lib postgresql cached copy from postgres base usr share postgresql usr share postgresql cached run mkdir var run postgresql chown postgres var run postgresql cached run mv conf shared yaml conf shared orig yaml cached copy conf workers shared extra yaml conf shared yaml cached workdir data cached copy conf postgres supervisord conf etc supervisor conf d postgres conf cached copy conf start for complement sh exporting to image exporting layers writing image naming to docker io library complement synapse use docker scan to run snyk tests against images to find vulnerabilities and learn how to fix them images built running complement config baseimageuri complement synapse baseimageuris map debugloggingenabled false alwaysprintserverlogs true besteffort false envvarspropagateprefix pass spawnhstimeout keepblueprints hostmounts packagenamespace fed cacertificate caprivatekey run testimporthistoricalmessages sharing host environment variables with container fed hs with application service failed to deploybaseimage fed hs with application service failed to check server is up timed out checking for homeserver to be up inspect container health unhealthy fed hs with application service server logs complement synapse launcher args env synapse complement database sqlite synapse complement use workers generating rsa private key bit long modulus primes e is signature ok subject cn getting ca private key dns generating a random secret for synapse registration shared secret generating a random secret for synapse macaroon secret key generating synapse config file data homeserver yaml generating log config file data log config setting ownership on data to traceback 
most recent call last file usr local lib runpy py line in run module as main mod name mod spec code get module details mod name error file usr local lib runpy py line in get module details import pkg name file usr local lib site packages synapse init py line in from synapse util rust import check rust lib up to date file usr local lib site packages synapse util rust py line in from synapse synapse rust import get rust file digest importerror usr local lib site packages synapse synapse rust so invalid elf header traceback most recent call last file start py line in main sys argv os environ file start py line in main return generate config from template file start py line in generate config from template subprocess check output args file usr local lib subprocess py line in check output return run popenargs stdout pipe timeout timeout check true file usr local lib subprocess py line in run raise calledprocesserror retcode process args subprocess calledprocesserror command returned non zero exit status generating base homeserver config traceback most recent call last file configure workers and start py line in main sys argv os environ file configure workers and start py line in main generate base homeserver config file configure workers and start py line in generate base homeserver config subprocess check output file usr local lib subprocess py line in check output return run popenargs stdout pipe timeout timeout check true file usr local lib subprocess py line in run raise calledprocesserror retcode process args subprocess calledprocesserror command returned non zero exit status fed hs with application service end logs test go deploy failed to construct blueprint constructblueprintifnotexist hs with application service failed to constructblueprint errors whilst constructing blueprint hs with application service fail testimporthistoricalmessages fail fail github com matrix org complement tests config baseimageuri complement synapse baseimageuris map debugloggingenabled false alwaysprintserverlogs true besteffort false envvarspropagateprefix pass spawnhstimeout keepblueprints hostmounts packagenamespace csapi cacertificate caprivatekey testing warning no tests to run pass ok github com matrix org complement tests csapi fail
1
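The Complement record above stops at the ImportError. A minimal diagnostic sketch follows, assuming (an assumption, not something the logs confirm) that the `invalid ELF header` comes from a host-built macOS `synapse_rust.abi3.so` in the working tree being copied into the Linux image; the file path is taken from the traceback, everything else is illustrative:

```sh
# Inspect the magic bytes of the extension that ended up in the image.
# A valid Linux ELF object starts with b'\x7fELF'; a macOS Mach-O library
# typically starts with b'\xcf\xfa\xed\xfe'.
docker run --rm --entrypoint python matrixdotorg/synapse -c \
  "print(open('/usr/local/lib/python3.9/site-packages/synapse/synapse_rust.abi3.so','rb').read(4))"

# If it is not ELF, remove the locally built artefact from the checkout and rerun,
# so the Docker build compiles the extension inside the Linux container instead.
rm -f synapse/synapse_rust.abi3.so
COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh -run TestImportHistoricalMessages
```

Reading the magic bytes with Python avoids depending on the `file` utility being present in the slim base image.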
15,676
6,023,693,327
IssuesEvent
2017-06-08 01:19:59
travis-ci/travis-ci
https://api.github.com/repos/travis-ci/travis-ci
closed
Proposal: travis-build integration testing (via Vagrant)
discussion travis-build
I've got a proposal for how to test scripts generated via travis-build on realistic worker environments. This should help for developing adding Windows support. If you want to jump right in, I created a [gist](https://gist.github.com/maxlinc/11051223) of how the final setup might work. ### Proposal The truth is most of this is made possible by the awesome work @mitchellh has done on vagrant. My proposal is just a Vagrantfile that takes advantage of the new [Vagrant Cloud]() and [Boxes 2.0](https://www.vagrantup.com/blog/feature-preview-vagrant-1-5-boxes-2-0.html) features. The way it works is by creating a Vagrantfile that defines three machines using three boxes: - travis-ci/travis-worker-linux - travis-ci/travis-worker-windows - travis-ci/travis-worker-osx These images can take advantage of the Vagrant cloud for discoverability, multi-provider support, and updates. We can generate build scripts by running travis-build on the host machine, and then using vagrant execute the scripts on the guests for each OS. See below for more details on the workflow. ### Benefits - Run generated scripts in realistic travis-worker environments (on all platforms) - Workers environments are discoverable, updatable - Worker environments can be made available on multiple virtualization platforms so contributors have their choice of local VMs or cloud machines - Depend on images, not the travis-cookbooks/travis-images scripts, so we can "automate the second time we do it" for windows beta images - Does not require the full travis stack - in fact you probably don't even need the full travis worker, just the necessary pieces for a particular script (e.g. rvm) I like the continuous delivery recommendation to "automate the second time". I think we should manually build a Windows image first, because: - It will help the travis-cookbooks/travis-images projects discover what needs to be automated and what's _difficult_ to automate on Windows - It will make a beta image available for travis-build and travis-worker to start testing now, even before the chef/chocolatey/whatever automation is solved ### Workflow This will let you take the output of travis-build, and send it via vagrant (via ssh->bash for Linux, or winrm->powershell for Windows) to execute on a worker. The details for adding and updating VMs is below. We can build updated VMs from travis-images, and then pull them into travis-build for integration testing. #### Add the VM the first time ``` shell $ vagrant box add travis-ci/travis-worker-linux ==> box: Loading metadata for box 'travis-ci/travis-worker-linux' box: URL: https://vagrantcloud.com/travis-ci/travis-worker-linux This box can work with multiple providers! The providers that it can work with are listed below. Please review the list and choose the provider you will be working with. 1) virtualbox 2) vmware_desktop 3) rackspace 4) azure 5) hyperv Enter your choice: 3 ==> box: Adding box 'travis-ci/travis-worker-linux' (v1.0.0) for provider: rackspace box: Download: https://vagrantcloud.com/chef/travis-ci/travis-worker-linux/version/1/provider/rackspace.box ``` We'd probably just start off with a few providers, but if travis-images moves switches to [packer](http://www.packer.io/) at some point, then it'll be easy to create "identical machine images for multiple platforms". #### Updating the VM The Vagrant VMs are also updatable, so contributors to the travis-build project can be notified and get updates when new travis-worker images become available. 
``` shell vince@hashicorp: $ vagrant box update travis-ci/travis-worker-linux ==> web: Checking for updates to 'travis-ci/travis-worker-linux': web: Version constraints: >= 0 web: Provider: rackspace ==> web: Updating 'travis-ci/travis-worker-linux' with provider 'rackspace' from version ==> web: '1.0.0' to '1.0.1'... ==> web: Loading metadata for box 'https://vagrantcloud.com/travis-ci/travis-worker-linux' ==> web: Adding box 'travis-ci/travis-worker-linux' (v1.0.1) for provider: virtualbox web: Download: https://vagrantcloud.com/travis-ci/travis-worker-linux/version/2/provider/rackspace.box ==> web: Successfully added box 'travis-ci/travis-worker-linux' (v1.0.1) for 'rackspace'! ``` #### Running tests Vagrant lets you send commands to a shell. SSH is obviously supported to run bash, but the `vagrant-next` branch has been making progress for WinRM support with cmd or powershell. I've successfully spiked using vagrant-rackspace/winrm/powershell from OSX with the `vagrant-next` branch. So all we need to do is have travis build print a script for the desired platform, and then use Vagrant to send it to the target machine. You can do this by putting a shell provisioner on each machine: ``` config.vm.define "windows" do |windows| windows.vm.provision "shell", :inline => `travis run --print --target windows` # ... end ``` But you could also just do it from the command line (which gives you more flexibility to run travis build for different projects). ``` $ vagrant ssh linux -c `travis run --print` $ vagrant ssh windows -c `travis run --print --target windows` $ vagrant ssh osx -c `travis run --print --target osx` ``` Alternatively, you could also have the tests delivered to the travis-worker normally. The reason I'm a fan of just letting vagrant handle the shell is that it'll let us start testing PowerShell on Windows even before the travis-worker is fully working on Windows. If you have a Windows machine with python installed, then you probably don't need the full travis worker to test a generated python build script. ### Next steps If that sounds good, how can we move forward? #### Linux images It should be possible to create, publish, and start testing with travis-ci/travis-worker-linux images. #### OSX images It might be possible to create travis-worker-osx images already as well, but there are fewer virtualization choices and possibly more concerns about redistribution/licensing of customized OSX images. Although I'd like pre-built images (and the update mechanism that goes with it), it might be necessary to just spin up a vanilla OSX image and run the cookbooks :\ #### Windows images I don't think Windows support is there yet, but there may be enough experimental Windows support for some contributors to start playing around with images. ##### Developing on Windows? If you're running Windows locally, hyperv is probably your best bet. Microsoft has a beta image available, but you probably can't redistribute a customized travis-worker-windows image. So an image may be possible, but not sharable.... ##### Developing on something else? If you're running something else locally, there's a few options: - vagrant-windows is probably the most complete and secure option, but there are no public images available. You'll need to build your own image with your own windows license. - You can also use Windows with Rackspace, AWS or other cloud providers if you install SSH and RSync on the Windows machines (usually via cygwin). 
This is hacky, but may be less painful than buying a Windows license and your own Virtualbox images. - I have a working spike of vagrant-rackspace using Windows/WinRM, but there isn't an official release. The WinRM support I used is in the `vagrant-next` branch of vagrant. - There is a vagrant-azure provider, but it's WIP and may not be usable yet. Thoughts?
1.0
Proposal: travis-build integration testing (via Vagrant) - I've got a proposal for how to test scripts generated via travis-build on realistic worker environments. This should help for developing adding Windows support. If you want to jump right in, I created a [gist](https://gist.github.com/maxlinc/11051223) of how the final setup might work. ### Proposal The truth is most of this is made possible by the awesome work @mitchellh has done on vagrant. My proposal is just a Vagrantfile that takes advantage of the new [Vagrant Cloud]() and [Boxes 2.0](https://www.vagrantup.com/blog/feature-preview-vagrant-1-5-boxes-2-0.html) features. The way it works is by creating a Vagrantfile that defines three machines using three boxes: - travis-ci/travis-worker-linux - travis-ci/travis-worker-windows - travis-ci/travis-worker-osx These images can take advantage of the Vagrant cloud for discoverability, multi-provider support, and updates. We can generate build scripts by running travis-build on the host machine, and then using vagrant execute the scripts on the guests for each OS. See below for more details on the workflow. ### Benefits - Run generated scripts in realistic travis-worker environments (on all platforms) - Workers environments are discoverable, updatable - Worker environments can be made available on multiple virtualization platforms so contributors have their choice of local VMs or cloud machines - Depend on images, not the travis-cookbooks/travis-images scripts, so we can "automate the second time we do it" for windows beta images - Does not require the full travis stack - in fact you probably don't even need the full travis worker, just the necessary pieces for a particular script (e.g. rvm) I like the continuous delivery recommendation to "automate the second time". I think we should manually build a Windows image first, because: - It will help the travis-cookbooks/travis-images projects discover what needs to be automated and what's _difficult_ to automate on Windows - It will make a beta image available for travis-build and travis-worker to start testing now, even before the chef/chocolatey/whatever automation is solved ### Workflow This will let you take the output of travis-build, and send it via vagrant (via ssh->bash for Linux, or winrm->powershell for Windows) to execute on a worker. The details for adding and updating VMs is below. We can build updated VMs from travis-images, and then pull them into travis-build for integration testing. #### Add the VM the first time ``` shell $ vagrant box add travis-ci/travis-worker-linux ==> box: Loading metadata for box 'travis-ci/travis-worker-linux' box: URL: https://vagrantcloud.com/travis-ci/travis-worker-linux This box can work with multiple providers! The providers that it can work with are listed below. Please review the list and choose the provider you will be working with. 1) virtualbox 2) vmware_desktop 3) rackspace 4) azure 5) hyperv Enter your choice: 3 ==> box: Adding box 'travis-ci/travis-worker-linux' (v1.0.0) for provider: rackspace box: Download: https://vagrantcloud.com/chef/travis-ci/travis-worker-linux/version/1/provider/rackspace.box ``` We'd probably just start off with a few providers, but if travis-images moves switches to [packer](http://www.packer.io/) at some point, then it'll be easy to create "identical machine images for multiple platforms". 
#### Updating the VM The Vagrant VMs are also updatable, so contributors to the travis-build project can be notified and get updates when new travis-worker images become available. ``` shell vince@hashicorp: $ vagrant box update travis-ci/travis-worker-linux ==> web: Checking for updates to 'travis-ci/travis-worker-linux': web: Version constraints: >= 0 web: Provider: rackspace ==> web: Updating 'travis-ci/travis-worker-linux' with provider 'rackspace' from version ==> web: '1.0.0' to '1.0.1'... ==> web: Loading metadata for box 'https://vagrantcloud.com/travis-ci/travis-worker-linux' ==> web: Adding box 'travis-ci/travis-worker-linux' (v1.0.1) for provider: virtualbox web: Download: https://vagrantcloud.com/travis-ci/travis-worker-linux/version/2/provider/rackspace.box ==> web: Successfully added box 'travis-ci/travis-worker-linux' (v1.0.1) for 'rackspace'! ``` #### Running tests Vagrant lets you send to a shell. SSH is obviously supported to run bash, but he `vagrant-next` branch has been making progress for WinRM support cmd or powershell. I've successfully spiked using vagrant-rackspace/winrm/powershell from OSX with the `vagrant-next` branch. So all we need to do is have travis build print a script for the desired platform, and then use Vagrant to send it to the target machine. You can do this by putting a shell provisioner on each machine: ``` config.vm.define "windows" do |windows| windows.vm.provision "shell", :inline => `travis run --print --target windows` # ... end ``` But you could also just do it from the command line (which gives you more flexibility to run travis build for different projects). ``` $ vagrant ssh linux -c `travis run --print` $ vagrant ssh windows -c `travis run --print --target windows` $ vagrant ssh osx -c `travis run --print --target osx` ``` Alternatively, you could also have the tests delivered to the travis-worker normally. The reason I'm a fan of just letting vagrant handle the shell is that it'll let us start testing PowerShell on Windows even before the travis-worker is fully working on Windows. If you have a Windows machine with python installed, then you probably don't need the full travis worker to test a generated python build script. ### Next steps If that sounds good, how can we move forward? #### Linux images It should be possible to create, publish, and start testing with travis-ci/travis-worker-linux images. #### OSX images It might be possible to create travis-worker-osx images already as well, but there are fewer virtualization choices and possible more concerns about redistribution/licensing of customized OSX images. Although I'd like pre-built images (and the update mechanism that goes with it), it might be necessary to just spin up a vanilla OSX image and run the cookbooks :\ #### Windows images I don't think Windows support is there yet, but there may be enough experimental Windows support for some contributors to start playing around with images. ##### Developing on Windows? If you're running Windows locally, hyperv is probably your best bet. Microsoft has a beta image available, but you probably can't redistribute customized travis-worker-windows image. So an image may be possible, but not sharable.... ##### Developing on something else? If you're running something else locally, there's a few options: - vagrant-windows is probably the most complete and secure option, but there are no public images available. You'll need to build your own image with your own windows license. 
- You can also use Windows with Rackspace, AWS or other cloud providers if you install SSH and RSync on the Windows machines (usually via cygwin). This is hacky, but may be less painful than buying a Windows license and your own Virtualbox images. - I have a working spike of vagrant-rackspace using Windows/WinRM, but there isn't an official release. The WinRM support I used is in the `vagrant-next` branch of vagrant. - There is a vagrant-azure provider, but it's WIP and may not be usable yet. Thoughts?
non_defect
proposal travis build integration testing via vagrant i ve got a proposal for how to test scripts generated via travis build on realistic worker environments this should help for developing adding windows support if you want to jump right in i created a of how the final setup might work proposal the truth is most of this is made possible by the awesome work mitchellh has done on vagrant my proposal is just a vagrantfile that takes advantage of the new and features the way it works is by creating a vagrantfile that defines three machines using three boxes travis ci travis worker linux travis ci travis worker windows travis ci travis worker osx these images can take advantage of the vagrant cloud for discoverability multi provider support and updates we can generate build scripts by running travis build on the host machine and then using vagrant execute the scripts on the guests for each os see below for more details on the workflow benefits run generated scripts in realistic travis worker environments on all platforms workers environments are discoverable updatable worker environments can be made available on multiple virtualization platforms so contributors have their choice of local vms or cloud machines depend on images not the travis cookbooks travis images scripts so we can automate the second time we do it for windows beta images does not require the full travis stack in fact you probably don t even need the full travis worker just the necessary pieces for a particular script e g rvm i like the continuous delivery recommendation to automate the second time i think we should manually build a windows image first because it will help the travis cookbooks travis images projects discover what needs to be automated and what s difficult to automate on windows it will make a beta image available for travis build and travis worker to start testing now even before the chef chocolatey whatever automation is solved workflow this will let you take the output of travis build and send it via vagrant via ssh bash for linux or winrm powershell for windows to execute on a worker the details for adding and updating vms is below we can build updated vms from travis images and then pull them into travis build for integration testing add the vm the first time shell vagrant box add travis ci travis worker linux box loading metadata for box travis ci travis worker linux box url this box can work with multiple providers the providers that it can work with are listed below please review the list and choose the provider you will be working with virtualbox vmware desktop rackspace azure hyperv enter your choice box adding box travis ci travis worker linux for provider rackspace box download we d probably just start off with a few providers but if travis images moves switches to at some point then it ll be easy to create identical machine images for multiple platforms updating the vm the vagrant vms are also updatable so contributors to the travis build project can be notified and get updates when new travis worker images become available shell vince hashicorp vagrant box update travis ci travis worker linux web checking for updates to travis ci travis worker linux web version constraints web provider rackspace web updating travis ci travis worker linux with provider rackspace from version web to web loading metadata for box web adding box travis ci travis worker linux for provider virtualbox web download web successfully added box travis ci travis worker linux for rackspace running tests vagrant lets you send to 
a shell ssh is obviously supported to run bash but he vagrant next branch has been making progress for winrm support cmd or powershell i ve successfully spiked using vagrant rackspace winrm powershell from osx with the vagrant next branch so all we need to do is have travis build print a script for the desired platform and then use vagrant to send it to the target machine you can do this by putting a shell provisioner on each machine config vm define windows do windows windows vm provision shell inline travis run print target windows end but you could also just do it from the command line which gives you more flexibility to run travis build for different projects vagrant ssh linux c travis run print vagrant ssh windows c travis run print target windows vagrant ssh osx c travis run print target osx alternatively you could also have the tests delivered to the travis worker normally the reason i m a fan of just letting vagrant handle the shell is that it ll let us start testing powershell on windows even before the travis worker is fully working on windows if you have a windows machine with python installed then you probably don t need the full travis worker to test a generated python build script next steps if that sounds good how can we move forward linux images it should be possible to create publish and start testing with travis ci travis worker linux images osx images it might be possible to create travis worker osx images already as well but there are fewer virtualization choices and possible more concerns about redistribution licensing of customized osx images although i d like pre built images and the update mechanism that goes with it it might be necessary to just spin up a vanilla osx image and run the cookbooks windows images i don t think windows support is there yet but there may be enough experimental windows support for some contributors to start playing around with images developing on windows if you re running windows locally hyperv is probably your best bet microsoft has a beta image available but you probably can t redistribute customized travis worker windows image so an image may be possible but not sharable developing on something else if you re running something else locally there s a few options vagrant windows is probably the most complete and secure option but there are no public images available you ll need to build your own image with your own windows license you can also use windows with rackspace aws or other cloud providers if you install ssh and rsync on the windows machines usually via cygwin this is hacky but may be less painful than buying a windows license and your own virtualbox images i have a working spike of vagrant rackspace using windows winrm but there isn t an official release the winrm support i used is in the vagrant next branch of vagrant there is a vagrant azure provider but it s wip and may not be usable yet thoughts
0
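Aside: the record above proposes generating a build script with travis-build on the host and pushing it to each Vagrant guest over `vagrant ssh -c`. A minimal host-side driver sketch in Python, assuming hypothetical guest names and a placeholder command string rather than the real travis-build invocation, might look like this:

```python
import subprocess

GUESTS = ["linux", "windows", "osx"]   # hypothetical machine names from a Vagrantfile
BUILD_COMMAND = "bash ~/build.sh"      # placeholder for the generated build script

def run_on_guest(guest: str, command: str) -> int:
    """Run a command on a Vagrant guest via `vagrant ssh <name> -c <command>`."""
    result = subprocess.run(
        ["vagrant", "ssh", guest, "-c", command],
        capture_output=True,
        text=True,
    )
    print(f"[{guest}] exit code {result.returncode}")
    print(result.stdout)
    return result.returncode

if __name__ == "__main__":
    failures = [g for g in GUESTS if run_on_guest(g, BUILD_COMMAND) != 0]
    print("failed guests:", failures or "none")
```

As the record notes, a Windows guest would need a WinRM-capable communicator instead of ssh; the driver above only illustrates the host-side loop.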
361,235
10,706,085,989
IssuesEvent
2019-10-24 14:48:20
kubeflow/kubeflow
https://api.github.com/repos/kubeflow/kubeflow
closed
[GCP] istio-ingressgateway failing health check on /healthz/ready
area/kfctl kind/bug platform/gcp priority/p0
/kind bug I just deployed kubeflow using kfctl_gcp_iap.yaml from master. When I checked the ingress the backend (istio-ingressready) was reported as unhealthy with 0 successful health checks. The health check was set to /healthz/ready which looks like the intended result https://github.com/kubeflow/manifests/blob/21160c1463a46645a5b70e465988723ec1df6ccc/gcp/iap-ingress/base/config-map.yaml#L186 However when I ported forwarded to the ingress gateway I got a 401 on /healthz/ready ``` curl -I http://localhost:8080/healthz/ready HTTP/1.1 401 Unauthorized content-length: 29 content-type: text/plain date: Thu, 10 Oct 2019 05:01:07 GMT server: istio-envoy ``` /healthz gave me a 200. ``` curl -I http://localhost:8080/healthz HTTP/1.1 200 OK x-powered-by: Express accept-ranges: bytes cache-control: public, max-age=0 last-modified: Wed, 09 Oct 2019 22:56:09 GMT etag: W/"599-16db2bc9f28" content-type: text/html; charset=UTF-8 content-length: 1433 date: Thu, 10 Oct 2019 05:01:35 GMT x-envoy-upstream-service-time: 2 server: istio-envoy ``` So I deleted the backend-updater and in the UI set the health check to /healthz. The health check passed and my backend came up. So it looks like the health check is incorrect. Did something change? Could this explain why the endpoint test is ready is failing kubeflow/kfctl#42 /cc @zhenghuiwang @lluunn
1.0
[GCP] istio-ingressgateway failing health check on /healthz/ready - /kind bug I just deployed kubeflow using kfctl_gcp_iap.yaml from master. When I checked the ingress the backend (istio-ingressready) was reported as unhealthy with 0 successful health checks. The health check was set to /healthz/ready which looks like the intended result https://github.com/kubeflow/manifests/blob/21160c1463a46645a5b70e465988723ec1df6ccc/gcp/iap-ingress/base/config-map.yaml#L186 However when I ported forwarded to the ingress gateway I got a 401 on /healthz/ready ``` curl -I http://localhost:8080/healthz/ready HTTP/1.1 401 Unauthorized content-length: 29 content-type: text/plain date: Thu, 10 Oct 2019 05:01:07 GMT server: istio-envoy ``` /healthz gave me a 200. ``` curl -I http://localhost:8080/healthz HTTP/1.1 200 OK x-powered-by: Express accept-ranges: bytes cache-control: public, max-age=0 last-modified: Wed, 09 Oct 2019 22:56:09 GMT etag: W/"599-16db2bc9f28" content-type: text/html; charset=UTF-8 content-length: 1433 date: Thu, 10 Oct 2019 05:01:35 GMT x-envoy-upstream-service-time: 2 server: istio-envoy ``` So I deleted the backend-updater and in the UI set the health check to /healthz. The health check passed and my backend came up. So it looks like the health check is incorrect. Did something change? Could this explain why the endpoint test is ready is failing kubeflow/kfctl#42 /cc @zhenghuiwang @lluunn
non_defect
istio ingressgateway failing health check on healthz ready kind bug i just deployed kubeflow using kfctl gcp iap yaml from master when i checked the ingress the backend istio ingressready was reported as unhealthy with successful health checks the health check was set to healthz ready which looks like the intended result however when i ported forwarded to the ingress gateway i got a on healthz ready curl i http unauthorized content length content type text plain date thu oct gmt server istio envoy healthz gave me a curl i http ok x powered by express accept ranges bytes cache control public max age last modified wed oct gmt etag w content type text html charset utf content length date thu oct gmt x envoy upstream service time server istio envoy so i deleted the backend updater and in the ui set the health check to healthz the health check passed and my backend came up so it looks like the health check is incorrect did something change could this explain why the endpoint test is ready is failing kubeflow kfctl cc zhenghuiwang lluunn
0
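Aside: the 401-versus-200 behaviour described in the record above is easy to re-check after a port-forward. A small probe using only the Python standard library, assuming the same hypothetical `localhost:8080` forward as in the report, could be:

```python
import urllib.error
import urllib.request

BASE = "http://localhost:8080"   # assumed port-forward to the ingress gateway

def status_code(path: str) -> int:
    """Return the HTTP status for BASE + path; an HTTPError still carries a code."""
    try:
        with urllib.request.urlopen(BASE + path, timeout=5) as response:
            return response.status
    except urllib.error.HTTPError as err:
        return err.code

for path in ("/healthz/ready", "/healthz"):
    print(f"{path}: {status_code(path)}")
```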
61,557
17,023,724,742
IssuesEvent
2021-07-03 03:30:22
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
closed
[roads] mapnik doesn't render bridges & tunnels on preserved and disused railways
Component: mapnik Priority: minor Resolution: duplicate Type: defect
**[Submitted to the original trac issue database at 1.45pm, Tuesday, 21st June 2011]** mapnik doesn't render bridges & tunnels on preserved and disused railways bridge=yes, railway=preserved: http://www.openstreetmap.org/browse/way/25920545 tunnel=yes, railway=preserved: http://www.openstreetmap.org/browse/way/28433441 tunnel=yes, railway=disused: http://www.openstreetmap.org/browse/way/32730781 Probably same kind of fault as http://trac.openstreetmap.org/ticket/2767 Thanks for all your work, great job!
1.0
[roads] mapnik doesn't render bridges & tunnels on preserved and disused railways - **[Submitted to the original trac issue database at 1.45pm, Tuesday, 21st June 2011]** mapnik doesn't render bridges & tunnels on preserved and disused railways bridge=yes, railway=preserved: http://www.openstreetmap.org/browse/way/25920545 tunnel=yes, railway=preserved: http://www.openstreetmap.org/browse/way/28433441 tunnel=yes, railway=disused: http://www.openstreetmap.org/browse/way/32730781 Probably same kind of fault as http://trac.openstreetmap.org/ticket/2767 Thanks for all your work, great job!
defect
mapnik doesn t render bridges tunnels on preserved and disused railways mapnik doesn t render bridges tunnels on preserved and disused railways bridge yes railway preserved tunnel yes railway preserved tunnel yes railway disused probably same kind of fault as thanks for all your work great job
1
76,496
26,459,780,042
IssuesEvent
2023-01-16 16:33:48
zed-industries/feedback
https://api.github.com/repos/zed-industries/feedback
closed
Typescript server will not run or download
defect typescript language
### Check for existing issues - [X] Completed ### Describe the bug I opened a typescript project, opened a typescript file, and hit save - the server did not start up. I typed an error and hit save - the server did not come up or report an error. I deleted the language server, reopened the project, resaved a typescript file, and the server did not download. ### To reproduce - ### Expected behavior - ### Environment Zed 0.53.1 – /Applications/Zed.app macOS 12.4 architecture x86_64 ### If applicable, add mockups / screenshots to help explain present your vision of the feature _No response_ ### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue ```log Caused by: 0: IO error: Connection reset by peer (os error 54) 1: Connection reset by peer (os error 54) 23:24:27 [INFO] set status on client 0: ConnectionLost 23:24:27 [INFO] set status on client 0: Reauthenticating 23:24:27 [INFO] set status on client 0: Reconnecting 23:24:27 [INFO] connected to rpc endpoint https://collab.zed.dev/rpc 23:24:28 [ERROR] not connected 23:24:28 [ERROR] not connected 23:24:28 [INFO] add connection to peer 23:24:28 [INFO] add_connection; 23:24:28 [INFO] set status to connected 53 23:24:28 [INFO] set status on client 0: Connected { connection_id: ConnectionId(53) } 23:28:46 [ERROR] No such file or directory (os error 2) 23:28:49 [WARN] incoming response: unknown request connection_id=53 message_id=11 responding_to=9 23:29:12 [INFO] open paths ["/Users/josephlyons/Desktop/move/zed/zed.dev"] 23:29:15 [ERROR] No such file or directory (os error 2) 23:30:53 [ERROR] No such file or directory (os error 2) 23:36:03 [ERROR] unexpected item event after pane was dropped 23:36:04 [WARN] incoming response: unknown request connection_id=53 message_id=24 responding_to=116 23:36:06 [INFO] open paths ["/Users/josephlyons/Library/Application Support/Zed/languages"] 23:36:08 [ERROR] No such file or directory (os error 2) ```
1.0
Typescript server will not run or download - ### Check for existing issues - [X] Completed ### Describe the bug I opened a typescript project, opened a typescript file, and hit save - the server did not start up. I typed an error and hit save - the server did not come up or report an error. I deleted the language server, reopened the project, resaved a typescript file, and the server did not download. ### To reproduce - ### Expected behavior - ### Environment Zed 0.53.1 – /Applications/Zed.app macOS 12.4 architecture x86_64 ### If applicable, add mockups / screenshots to help explain present your vision of the feature _No response_ ### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue ```log Caused by: 0: IO error: Connection reset by peer (os error 54) 1: Connection reset by peer (os error 54) 23:24:27 [INFO] set status on client 0: ConnectionLost 23:24:27 [INFO] set status on client 0: Reauthenticating 23:24:27 [INFO] set status on client 0: Reconnecting 23:24:27 [INFO] connected to rpc endpoint https://collab.zed.dev/rpc 23:24:28 [ERROR] not connected 23:24:28 [ERROR] not connected 23:24:28 [INFO] add connection to peer 23:24:28 [INFO] add_connection; 23:24:28 [INFO] set status to connected 53 23:24:28 [INFO] set status on client 0: Connected { connection_id: ConnectionId(53) } 23:28:46 [ERROR] No such file or directory (os error 2) 23:28:49 [WARN] incoming response: unknown request connection_id=53 message_id=11 responding_to=9 23:29:12 [INFO] open paths ["/Users/josephlyons/Desktop/move/zed/zed.dev"] 23:29:15 [ERROR] No such file or directory (os error 2) 23:30:53 [ERROR] No such file or directory (os error 2) 23:36:03 [ERROR] unexpected item event after pane was dropped 23:36:04 [WARN] incoming response: unknown request connection_id=53 message_id=24 responding_to=116 23:36:06 [INFO] open paths ["/Users/josephlyons/Library/Application Support/Zed/languages"] 23:36:08 [ERROR] No such file or directory (os error 2) ```
defect
typescript server will not run or download check for existing issues completed describe the bug i opened a typescript project opened a typescript file and hit save the server did not start up i typed an error and hit save the server did not come up or report an error i deleted the language server reopened the project resaved a typescript file and the server did not download to reproduce expected behavior environment zed – applications zed app macos architecture if applicable add mockups screenshots to help explain present your vision of the feature no response if applicable attach your library logs zed zed log file to this issue log caused by io error connection reset by peer os error connection reset by peer os error set status on client connectionlost set status on client reauthenticating set status on client reconnecting connected to rpc endpoint not connected not connected add connection to peer add connection set status to connected set status on client connected connection id connectionid no such file or directory os error incoming response unknown request connection id message id responding to open paths no such file or directory os error no such file or directory os error unexpected item event after pane was dropped incoming response unknown request connection id message id responding to open paths no such file or directory os error
1
3,684
6,552,950,916
IssuesEvent
2017-09-05 20:25:26
facebook/hhvm
https://api.github.com/repos/facebook/hhvm
closed
Compatibility with PHP 7.1 nullable types
php7 incompatibility probably easy
This is more of a question than a bug report, but I think I might have found some regressions in HHVM's type system with regards to nullable types. I maintain a [mocking library](https://github.com/eloquent/phony) with support for HHVM >= 3.6 and PHP >= 5.3. In order to support newer language features, such as nullable types, I use feature detection where possible, rather than checking for hardcoded version numbers. I have a check for nullable type support that boils down to this: ```php $reporting = error_reporting(E_ERROR | E_COMPILE_ERROR); $result = false; try { $result = eval('function(?int $a){};return true;'); } catch (Throwable $e) { } catch (Exception $e) { // ignore } error_reporting($reporting); $isSupported = true === $result; var_dump($isSupported); ``` Which was working fine, or so I thought, until a user of the mocking library submitted a [bug report](https://github.com/eloquent/phony/pull/204) because HHVM was throwing fatal errors when running this code. I verified that the bug report was valid, and can reproduce it, but the strange thing is that it doesn't produce fatal errors for a couple of HHVM versions **in between** two versions that *do* produce fatals (see [this build](https://travis-ci.org/ezzatron/oauth2-client/builds/173144588) for details): ![screen shot 2016-11-04 at 12 40 13 pm](https://cloud.githubusercontent.com/assets/100152/19992694/e22dfd0e-a28b-11e6-99c2-3e951e3d984d.png) And even stranger, this code is also executed in the Travis build for the mocking framework itself, where it works for *all* versions of HHVM shown above. In addition, it behaves differently again when run under 3v4l.org: https://3v4l.org/R3ZcY So, there are a few questions that arise from the above issues: 1. What factors could be causing the differences I'm seeing here across ostensibly identical versions of HHVM? Is it perhaps a compilation flag that changes this behavior? 2. Why does support for this approach to feature detecting seem to get better around 3.9, but worse again with 3.15? Is there perhaps a regression somewhere? 3. What are the plans for HHVM with regards to supporting PHP7 type system features (including nullable types) in the future? Will HHVM eventually "just work" with PHP7 syntax? Any info or ideas appreciated. Thanks for reading!
True
Compatibility with PHP 7.1 nullable types - This is more of a question than a bug report, but I think I might have found some regressions in HHVM's type system with regards to nullable types. I maintain a [mocking library](https://github.com/eloquent/phony) with support for HHVM >= 3.6 and PHP >= 5.3. In order to support newer language features, such as nullable types, I use feature detection where possible, rather than checking for hardcoded version numbers. I have a check for nullable type support that boils down to this: ```php $reporting = error_reporting(E_ERROR | E_COMPILE_ERROR); $result = false; try { $result = eval('function(?int $a){};return true;'); } catch (Throwable $e) { } catch (Exception $e) { // ignore } error_reporting($reporting); $isSupported = true === $result; var_dump($isSupported); ``` Which was working fine, or so I thought, until a user of the mocking library submitted a [bug report](https://github.com/eloquent/phony/pull/204) because HHVM was throwing fatal errors when running this code. I verified that the bug report was valid, and can reproduce it, but the strange thing is that it doesn't produce fatal errors for a couple of HHVM versions **in between** two versions that *do* produce fatals (see [this build](https://travis-ci.org/ezzatron/oauth2-client/builds/173144588) for details): ![screen shot 2016-11-04 at 12 40 13 pm](https://cloud.githubusercontent.com/assets/100152/19992694/e22dfd0e-a28b-11e6-99c2-3e951e3d984d.png) And even stranger, this code is also executed in the Travis build for the mocking framework itself, where it works for *all* versions of HHVM shown above. In addition, it behaves differently again when run under 3v4l.org: https://3v4l.org/R3ZcY So, there are a few questions that arise from the above issues: 1. What factors could be causing the differences I'm seeing here across ostensibly identical versions of HHVM? Is it perhaps a compilation flag that changes this behavior? 2. Why does support for this approach to feature detecting seem to get better around 3.9, but worse again with 3.15? Is there perhaps a regression somewhere? 3. What are the plans for HHVM with regards to supporting PHP7 type system features (including nullable types) in the future? Will HHVM eventually "just work" with PHP7 syntax? Any info or ideas appreciated. Thanks for reading!
non_defect
compatibility with php nullable types this is more of a question than a bug report but i think i might have found some regressions in hhvm s type system with regards to nullable types i maintain a with support for hhvm and php in order to support newer language features such as nullable types i use feature detection where possible rather than checking for hardcoded version numbers i have a check for nullable type support that boils down to this php reporting error reporting e error e compile error result false try result eval function int a return true catch throwable e catch exception e ignore error reporting reporting issupported true result var dump issupported which was working fine or so i thought until a user of the mocking library submitted a because hhvm was throwing fatal errors when running this code i verified that the bug report was valid and can reproduce it but the strange thing is that it doesn t produce fatal errors for a couple of hhvm versions in between two versions that do produce fatals see for details and even stranger this code is also executed in the travis build for the mocking framework itself where it works for all versions of hhvm shown above in addition it behaves differently again when run under org so there are a few questions that arise from the above issues what factors could be causing the differences i m seeing here across ostensibly identical versions of hhvm is it perhaps a compilation flag that changes this behavior why does support for this approach to feature detecting seem to get better around but worse again with is there perhaps a regression somewhere what are the plans for hhvm with regards to supporting type system features including nullable types in the future will hhvm eventually just work with syntax any info or ideas appreciated thanks for reading
0
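Aside: the record above feature-detects nullable-type support by compiling a tiny snippet instead of checking version numbers. The same compile-and-catch idea expressed in Python (used here only as a stand-in, since the PHP/HHVM specifics are exactly what is in question) looks like this, probing positional-only parameters as the example feature:

```python
def supports_syntax(snippet: str) -> bool:
    """Feature-detect a language construct by trying to compile it."""
    try:
        compile(snippet, "<probe>", "exec")
        return True
    except SyntaxError:
        return False

# Positional-only parameters need Python >= 3.8, so the probe doubles as a version check.
print(supports_syntax("def f(a, /):\n    pass"))
```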
43,451
11,725,064,602
IssuesEvent
2020-03-10 12:14:05
vla7/akina
https://api.github.com/repos/vla7/akina
closed
Image Gallery
Priority-Medium Type-Defect auto-migrated
``` I suggest creating a gallery of recently uploaded images ``` Original issue reported on code.google.com by `dima4...@gmail.com` on 7 Jun 2013 at 5:42
1.0
Image Gallery - ``` I suggest creating a gallery of recently uploaded images ``` Original issue reported on code.google.com by `dima4...@gmail.com` on 7 Jun 2013 at 5:42
defect
image gallery i suggest creating a gallery of recently uploaded images original issue reported on code google com by gmail com on jun at
1
808,910
30,116,204,821
IssuesEvent
2023-06-30 11:44:53
momentum-mod/website
https://api.github.com/repos/momentum-mod/website
opened
Move backend bundling to esbuild
Size: Small Priority: Medium Type: Dev/Internal For: Backend
We're bundling the backend with webpack currently, whilst not overly slow, may as well switch to esbuild as we migrate the frontend (https://github.com/momentum-mod/website/issues/748) as well .
1.0
Move backend bundling to esbuild - We're bundling the backend with webpack currently, whilst not overly slow, may as well switch to esbuild as we migrate the frontend (https://github.com/momentum-mod/website/issues/748) as well .
non_defect
move backend bundling to esbuild we re bundling the backend with webpack currently whilst not overly slow may as well switch to esbuild as we migrate the frontend as well
0
29,567
5,722,228,544
IssuesEvent
2017-04-20 08:58:26
hazelcast/hazelcast-jet
https://api.github.com/repos/hazelcast/hazelcast-jet
closed
Replace `ReadFileStreamP` with batch file processor
defect
The current API tries to use `WatchService` for watching changes to files in a directory and stream them. However, it's very unreliable, and thus unusable beyond some may-be-working example. Stream-processing of log files cannot be reliably done using files. We need to replace it with a processor, that will process all files in a directory and then finish. This will handle the use case for batch processing of log files, for example.
1.0
Replace `ReadFileStreamP` with batch file processor - The current API tries to use `WatchService` for watching changes to files in a directory and stream them. However, it's very unreliable, and thus unusable beyond some may-be-working example. Stream-processing of log files cannot be reliably done using files. We need to replace it with a processor, that will process all files in a directory and then finish. This will handle the use case for batch processing of log files, for example.
defect
replace readfilestreamp with batch file processor the current api tries to use watchservice for watching changes to files in a directory and stream them however it s very unreliable and thus unusable beyond some may be working example stream processing of log files cannot be reliably done using files we need to replace it with a processor that will process all files in a directory and then finish this will handle the use case for batch processing of log files for example
1
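Aside: the record above replaces an unreliable directory watcher with a processor that reads every existing file once and then finishes. The batch semantics (not the Jet API itself) can be sketched in a few lines of Python, with the directory path a made-up example:

```python
from pathlib import Path
from typing import Iterator

def read_directory_once(directory: str) -> Iterator[str]:
    """Yield every line of every regular file in the directory, then stop (no watching)."""
    for path in sorted(Path(directory).iterdir()):
        if path.is_file():
            yield from path.read_text(encoding="utf-8", errors="replace").splitlines()

if __name__ == "__main__":
    total = sum(1 for _ in read_directory_once("/var/log/myapp"))
    print(f"processed {total} lines")
```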
46,740
13,055,967,983
IssuesEvent
2020-07-30 03:15:47
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
opened
[g4-tankresponse] null pointer dereference (Trac #1797)
Incomplete Migration Migrated from Trac cmake defect
Migrated from https://code.icecube.wisc.edu/ticket/1797 ```json { "status": "closed", "changetime": "2016-07-28T18:37:37", "description": "found by static analyser http://software.icecube.wisc.edu/static_analysis/2016-07-26-030212-26135-1/report-a785e8.html#EndPath", "reporter": "kjmeagher", "cc": "", "resolution": "invalid", "_ts": "1469731057656184", "component": "cmake", "summary": "[g4-tankresponse] null pointer dereference", "priority": "normal", "keywords": "", "time": "2016-07-27T07:58:31", "milestone": "Long-Term Future", "owner": "jgonzalez", "type": "defect" } ```
1.0
[g4-tankresponse] null pointer dereference (Trac #1797) - Migrated from https://code.icecube.wisc.edu/ticket/1797 ```json { "status": "closed", "changetime": "2016-07-28T18:37:37", "description": "found by static analyser http://software.icecube.wisc.edu/static_analysis/2016-07-26-030212-26135-1/report-a785e8.html#EndPath", "reporter": "kjmeagher", "cc": "", "resolution": "invalid", "_ts": "1469731057656184", "component": "cmake", "summary": "[g4-tankresponse] null pointer dereference", "priority": "normal", "keywords": "", "time": "2016-07-27T07:58:31", "milestone": "Long-Term Future", "owner": "jgonzalez", "type": "defect" } ```
defect
null pointer dereference trac migrated from json status closed changetime description found by static analyser reporter kjmeagher cc resolution invalid ts component cmake summary null pointer dereference priority normal keywords time milestone long term future owner jgonzalez type defect
1
17,659
3,012,801,823
IssuesEvent
2015-07-29 02:45:15
yawlfoundation/yawl
https://api.github.com/repos/yawlfoundation/yawl
closed
Start condition label alignment
auto-migrated Component-Editor Priority-Low Type-Defect
``` Set the label of a start condition. ``` Original issue reported on code.google.com by `stephan....@googlemail.com` on 26 Sep 2008 at 3:26 Attachments: * [scond.jpg](https://storage.googleapis.com/google-code-attachments/yawl/issue-158/comment-0/scond.jpg)
1.0
Start condition label alignment - ``` Set the label of a start condition. ``` Original issue reported on code.google.com by `stephan....@googlemail.com` on 26 Sep 2008 at 3:26 Attachments: * [scond.jpg](https://storage.googleapis.com/google-code-attachments/yawl/issue-158/comment-0/scond.jpg)
defect
start condition label alignment set the label of a start condition original issue reported on code google com by stephan googlemail com on sep at attachments
1
78,204
27,369,220,501
IssuesEvent
2023-02-27 21:49:05
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
opened
Clicking on the parent to a reply of a thread (but not a reply in thread) renders wrong (both) replies in the thread sidebar
T-Defect
### Steps to reproduce Send a message A Reply to the message A with reply B Reply to the reply B with reply C Have someone else Reply in Thread to the original message A Now there will be two messages with 'reply' preview chains Now when I click the reply preview chain the beginning, it shows the Thread replies, not the regular replies. For some reason, after I click on the Thread replies, then back to the regular replies, the sidebar renders correctly. Please excuse the frustrated language. With some help, threads can become Treasure <3 https://user-images.githubusercontent.com/32407/221689418-294a2d6e-5313-4744-8ebe-dc47fc8a62f9.mov ### Outcome #### What did you expect? Good question! My expectation was seeing the content in the reply preview when i click on it #### What happened instead? It showed replies from the Thread replies ### Operating system macOS ### Browser information Firefox 111.0 ### URL for webapp https://chat.woodbine.nyc ### Application version Element version: 1.11.23 Olm version: 3.2.12 ### Homeserver matrixdotorg/synapse:latest (1.77.0) ### Will you send logs? Yes
1.0
Clicking on the parent to a reply of a thread (but not a reply in thread) renders wrong (both) replies in the thread sidebar - ### Steps to reproduce Send a message A Reply to the message A with reply B Reply to the reply B with reply C Have someone else Reply in Thread to the original message A Now there will be two messages with 'reply' preview chains Now when I click the reply preview chain the beginning, it shows the Thread replies, not the regular replies. For some reason, after I click on the Thread replies, then back to the regular replies, the sidebar renders correctly. Please excuse the frustrated language. With some help, threads can become Treasure <3 https://user-images.githubusercontent.com/32407/221689418-294a2d6e-5313-4744-8ebe-dc47fc8a62f9.mov ### Outcome #### What did you expect? Good question! My expectation was seeing the content in the reply preview when i click on it #### What happened instead? It showed replies from the Thread replies ### Operating system macOS ### Browser information Firefox 111.0 ### URL for webapp https://chat.woodbine.nyc ### Application version Element version: 1.11.23 Olm version: 3.2.12 ### Homeserver matrixdotorg/synapse:latest (1.77.0) ### Will you send logs? Yes
defect
clicking on the parent to a reply of a thread but not a reply in thread renders wrong both replies in the thread sidebar steps to reproduce send a message a reply to the message a with reply b reply to the reply b with reply c have someone else reply in thread to the original message a now there will be two messages with reply preview chains now when i click the reply preview chain the beginning it shows the thread replies not the regular replies for some reason after i click on the thread replies then back to the regular replies the sidebar renders correctly please excuse the frustrated language with some help threads can become treasure outcome what did you expect good question my expectation was seeing the content in the reply preview when i click on it what happened instead it showed replies from the thread replies operating system macos browser information firefox url for webapp application version element version olm version homeserver matrixdotorg synapse latest will you send logs yes
1
126,633
26,887,531,243
IssuesEvent
2023-02-06 05:27:45
appsmithorg/appsmith
https://api.github.com/repos/appsmithorg/appsmith
closed
[Feature]: Add Feature Flag for new GSheets datasource flow
Backend Task Google Sheets BE Coders Pod Test Plan Approved Integrations Pod
### Is there an existing issue for this? - [X] I have searched the existing issues ### Summary When I add new gsheet datasource, I should continue to see the existing flow until feature flag is lifted, so that incomplete implementation is hidden from end user. https://www.notion.so/How-to-add-a-new-feature-flag-562904be9d224776b72b9127d77479cf ### Why should this be worked on? -
1.0
[Feature]: Add Feature Flag for new GSheets datasource flow - ### Is there an existing issue for this? - [X] I have searched the existing issues ### Summary When I add new gsheet datasource, I should continue to see the existing flow until feature flag is lifted, so that incomplete implementation is hidden from end user. https://www.notion.so/How-to-add-a-new-feature-flag-562904be9d224776b72b9127d77479cf ### Why should this be worked on? -
non_defect
add feature flag for new gsheets datasource flow is there an existing issue for this i have searched the existing issues summary when i add new gsheet datasource i should continue to see the existing flow until feature flag is lifted so that incomplete implementation is hidden from end user why should this be worked on
0
277,115
24,049,885,034
IssuesEvent
2022-09-16 11:49:41
ESMValGroup/ESMValTool
https://api.github.com/repos/ESMValGroup/ESMValTool
opened
Run a quick checksum MD5 or SHA or even a size check while downloading raw OBS data
enhancement observations testing
First time I downloaded industrial sized raw OBS data and was met with a few dud files at the end of the download - when the wget hiccuped a bit but didn't exit. We should run a checksum after the download happened, or even something as simple as a size or ncdump -h check.
1.0
Run a quick checksum MD5 or SHA or even a size check while downloading raw OBS data - First time I downloaded industrial sized raw OBS data and was met with a few dud files at the end of the download - when the wget hiccuped a bit but didn't exit. We should run a checksum after the download happened, or even something as simple as a size or ncdump -h check.
non_defect
run a quick checksum or sha or even a size check while downloading raw obs data first time i downloaded industrial sized raw obs data and was met with a few dud files at the end of the download when the wget hiccuped a bit but didn t exit we should run a checksum after the download happened or even something as simple as a size or ncdump h check
0
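Aside: the check the record above asks for is cheap to script. A sketch with `hashlib` from the Python standard library, where the expected digest would come from whatever the data provider publishes:

```python
import hashlib
from pathlib import Path

def sha256sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large downloads never sit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def looks_complete(path: str, expected_sha256: str, min_bytes: int = 1) -> bool:
    """Size check first (catches truncated wget output), then the checksum comparison."""
    if Path(path).stat().st_size < min_bytes:
        return False
    return sha256sum(path) == expected_sha256
```

The size check alone already catches the "dud file at the end of the download" case the record describes; the checksum covers silent corruption as well.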
149,828
5,729,141,215
IssuesEvent
2017-04-21 04:37:23
TrinityCore/TrinityCore
https://api.github.com/repos/TrinityCore/TrinityCore
closed
Focus Magic treated like a debuff now
Branch-3.3.5a Comp-Core Priority-Cosmetic Sub-Spells
Focus Magic treated like a debuff. At some point after March 23rd, 2017 the Focus Magic buff is acting like a debuff. The buff icon shows where debuffs typically appear. Updated on March 23rd 2017 and it was functioning. Recompiled from current source as of April 14th (TrinityCore rev. 4a40b8ed4358 2017-04-14 16:53:59 -0300 (3.3.5 branch) (Win64, Release, Static) and the buff changed behavior. Not sure if it has to do with this track number that addresses the same spell: [https://github.com/TrinityCore/TrinityCore/issues/14779] **Steps to reproduce the problem:** Simply cast the buff on a friendly party member. It will show as a debuff and the player cannot remove it just like a debuff. Branch 3.3.5, master not sure about other branches TrinityCore rev. 4a40b8ed4358 2017-04-14 16:53:59 -0300 (3.3.5 branch) (Win64, Release, Static TDB 335.62 Windows 10 64bit
1.0
Focus Magic treated like a debuff now - Focus Magic treated like a debuff. At some point after March 23rd, 2017 the Focus Magic buff is acting like a debuff. The buff icon shows where debuffs typically appear. Updated on March 23rd 2017 and it was functioning. Recompiled from current source as of April 14th (TrinityCore rev. 4a40b8ed4358 2017-04-14 16:53:59 -0300 (3.3.5 branch) (Win64, Release, Static) and the buff changed behavior. Not sure if it has to do with this track number that addresses the same spell: [https://github.com/TrinityCore/TrinityCore/issues/14779] **Steps to reproduce the problem:** Simply cast the buff on a friendly party member. It will show as a debuff and the player cannot remove it just like a debuff. Branch 3.3.5, master not sure about other branches TrinityCore rev. 4a40b8ed4358 2017-04-14 16:53:59 -0300 (3.3.5 branch) (Win64, Release, Static TDB 335.62 Windows 10 64bit
non_defect
focus magic treated like a debuff now focus magic treated like a debuff at some point after march the focus magic buff is acting like a debuff the buff icon shows where debuffs typically appear updated on march and it was functioning recompiled from current source as of april trinitycore rev branch release static and the buff changed behavior not sure if it has to do with this track number that addresses the same spell steps to reproduce the problem simply cast the buff on a friendly party member it will show as a debuff and the player cannot remove it just like a debuff branch master not sure about other branches trinitycore rev branch release static tdb windows
0
51,383
13,207,460,162
IssuesEvent
2020-08-14 23:11:15
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
opened
tableio default converters have shared state (Trac #317)
Incomplete Migration Migrated from Trac booking defect
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/317">https://code.icecube.wisc.edu/projects/icecube/ticket/317</a>, reported by jvansantenand owned by jvansanten</em></summary> <p> ```json { "status": "closed", "changetime": "2012-05-30T20:06:59", "_ts": "1338408419000000", "description": "There is exactly one instance of the default converter for each type. This fails when the converters need object-specific state, e.g. when there are two I3MapStringDoubles to be booked with different keys", "reporter": "jvansanten", "cc": "", "resolution": "fixed", "time": "2011-10-30T19:49:32", "component": "booking", "summary": "tableio default converters have shared state", "priority": "normal", "keywords": "", "milestone": "", "owner": "jvansanten", "type": "defect" } ``` </p> </details>
1.0
tableio default converters have shared state (Trac #317) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/317">https://code.icecube.wisc.edu/projects/icecube/ticket/317</a>, reported by jvansantenand owned by jvansanten</em></summary> <p> ```json { "status": "closed", "changetime": "2012-05-30T20:06:59", "_ts": "1338408419000000", "description": "There is exactly one instance of the default converter for each type. This fails when the converters need object-specific state, e.g. when there are two I3MapStringDoubles to be booked with different keys", "reporter": "jvansanten", "cc": "", "resolution": "fixed", "time": "2011-10-30T19:49:32", "component": "booking", "summary": "tableio default converters have shared state", "priority": "normal", "keywords": "", "milestone": "", "owner": "jvansanten", "type": "defect" } ``` </p> </details>
defect
tableio default converters have shared state trac migrated from json status closed changetime ts description there is exactly one instance of the default converter for each type this fails when the converters need object specific state e g when there are two to be booked with different keys reporter jvansanten cc resolution fixed time component booking summary tableio default converters have shared state priority normal keywords milestone owner jvansanten type defect
1
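Aside: the failure mode in the record above — one shared default converter that cannot hold two different key configurations — is generic. A plain-Python analogy (not the tableio API) of shared versus per-object converter state:

```python
class KeyedConverter:
    """Books only the listed keys; the key list is object-specific state."""
    def __init__(self, keys):
        self.keys = list(keys)

    def convert(self, mapping):
        return {key: mapping.get(key) for key in self.keys}

# One shared instance: reconfiguring it for the second object clobbers the first booking.
shared = KeyedConverter(["energy"])
shared.keys = ["zenith"]
print(shared.convert({"energy": 1.0}))          # {'zenith': None} -- wrong keys

# One instance per booked object keeps the configurations independent.
converters = [KeyedConverter(["energy"]), KeyedConverter(["zenith"])]
print(converters[0].convert({"energy": 1.0}))   # {'energy': 1.0}
```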
48,043
13,067,411,452
IssuesEvent
2020-07-31 00:22:06
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
closed
[I3_PORTS] CentOS7 freetype error (Trac #1700)
Migrated from Trac defect tools/ports
On CentOS7, I can't install Geant4 or Root from ports because of a freetype error: ```text g++ -pipe -m64 -Wshadow -Wall -W -Woverloaded-virtual -fPIC -Iinclude -pthread -Wno-deprecated-declarations -I. -I/cvmfs/icecube.opensciencegrid.org/py2-v2/RHEL_7_x86_64/i3ports/var/db/dports/build/file._cvmfs_icecube.opensciencegrid.org_py2-v2_RHEL_7_x86_64_i3ports_var_db_dports_sources_rsync.code.icecube.wisc.edu_icecube-tools-ports_science_root_5.34.18/work/root/cint/cint/inc -o misc/memstat/src/G__MemStat.o -c misc/memstat/src/G__MemStat.cxx In file included from asfont.c:67:0: ../../../../graf2d/freetype/src/freetype-2.3.12/include/freetype/freetype.h:21:2: error: #error "`ft2build.h' hasn't been included yet!" #error "`ft2build.h' hasn't been included yet!" ^ ../../../../graf2d/freetype/src/freetype-2.3.12/include/freetype/freetype.h:22:2: error: #error "Please always use macros to include FreeType header files." #error "Please always use macros to include FreeType header files." ^ ../../../../graf2d/freetype/src/freetype-2.3.12/include/freetype/freetype.h:23:2: error: #error "Example:" #error "Example:" ^ ../../../../graf2d/freetype/src/freetype-2.3.12/include/freetype/freetype.h:24:2: error: #error " #include <ft2build.h>" #error " #include <ft2build.h>" ^ ../../../../graf2d/freetype/src/freetype-2.3.12/include/freetype/freetype.h:25:2: error: #error " #include FT_FREETYPE_H" #error " #include FT_FREETYPE_H" ^ make[1]: *** [asfont.o] Error 1 ``` Note that the header in question is available at `/usr/include/ft2build.h` Migrated from https://code.icecube.wisc.edu/ticket/1700 ```json { "status": "closed", "changetime": "2019-02-13T14:12:47", "description": "On CentOS7, I can't install Geant4 or Root from ports because of a freetype error:\n\n{{{\ng++ -pipe -m64 -Wshadow -Wall -W -Woverloaded-virtual -fPIC -Iinclude -pthread -Wno-deprecated-declarations -I. 
-I/cvmfs/icecube.opensciencegrid.org/py2-v2/RHEL_7_x86_64/i3ports/var/db/dports/build/file._cvmfs_icecube.opensciencegrid.org_py2-v2_RHEL_7_x86_64_i3ports_var_db_dports_sources_rsync.code.icecube.wisc.edu_icecube-tools-ports_science_root_5.34.18/work/root/cint/cint/inc -o misc/memstat/src/G__MemStat.o -c misc/memstat/src/G__MemStat.cxx\nIn file included from asfont.c:67:0:\n../../../../graf2d/freetype/src/freetype-2.3.12/include/freetype/freetype.h:21:2: error: #error \"`ft2build.h' hasn't been included yet!\"\n #error \"`ft2build.h' hasn't been included yet!\"\n ^\n../../../../graf2d/freetype/src/freetype-2.3.12/include/freetype/freetype.h:22:2: error: #error \"Please always use macros to include FreeType header files.\"\n #error \"Please always use macros to include FreeType header files.\"\n ^\n../../../../graf2d/freetype/src/freetype-2.3.12/include/freetype/freetype.h:23:2: error: #error \"Example:\"\n #error \"Example:\"\n ^\n../../../../graf2d/freetype/src/freetype-2.3.12/include/freetype/freetype.h:24:2: error: #error \" #include <ft2build.h>\"\n #error \" #include <ft2build.h>\"\n ^\n../../../../graf2d/freetype/src/freetype-2.3.12/include/freetype/freetype.h:25:2: error: #error \" #include FT_FREETYPE_H\"\n #error \" #include FT_FREETYPE_H\"\n ^\nmake[1]: *** [asfont.o] Error 1\n}}}\n\nNote that the header in question is available at `/usr/include/ft2build.h`", "reporter": "david.schultz", "cc": "", "resolution": "fixed", "_ts": "1550067167842669", "component": "tools/ports", "summary": "[I3_PORTS] CentOS7 freetype error", "priority": "major", "keywords": "", "time": "2016-05-12T16:23:08", "milestone": "", "owner": "nega", "type": "defect" } ```
1.0
[I3_PORTS] CentOS7 freetype error (Trac #1700) - On CentOS7, I can't install Geant4 or Root from ports because of a freetype error: ```text g++ -pipe -m64 -Wshadow -Wall -W -Woverloaded-virtual -fPIC -Iinclude -pthread -Wno-deprecated-declarations -I. -I/cvmfs/icecube.opensciencegrid.org/py2-v2/RHEL_7_x86_64/i3ports/var/db/dports/build/file._cvmfs_icecube.opensciencegrid.org_py2-v2_RHEL_7_x86_64_i3ports_var_db_dports_sources_rsync.code.icecube.wisc.edu_icecube-tools-ports_science_root_5.34.18/work/root/cint/cint/inc -o misc/memstat/src/G__MemStat.o -c misc/memstat/src/G__MemStat.cxx In file included from asfont.c:67:0: ../../../../graf2d/freetype/src/freetype-2.3.12/include/freetype/freetype.h:21:2: error: #error "`ft2build.h' hasn't been included yet!" #error "`ft2build.h' hasn't been included yet!" ^ ../../../../graf2d/freetype/src/freetype-2.3.12/include/freetype/freetype.h:22:2: error: #error "Please always use macros to include FreeType header files." #error "Please always use macros to include FreeType header files." ^ ../../../../graf2d/freetype/src/freetype-2.3.12/include/freetype/freetype.h:23:2: error: #error "Example:" #error "Example:" ^ ../../../../graf2d/freetype/src/freetype-2.3.12/include/freetype/freetype.h:24:2: error: #error " #include <ft2build.h>" #error " #include <ft2build.h>" ^ ../../../../graf2d/freetype/src/freetype-2.3.12/include/freetype/freetype.h:25:2: error: #error " #include FT_FREETYPE_H" #error " #include FT_FREETYPE_H" ^ make[1]: *** [asfont.o] Error 1 ``` Note that the header in question is available at `/usr/include/ft2build.h` Migrated from https://code.icecube.wisc.edu/ticket/1700 ```json { "status": "closed", "changetime": "2019-02-13T14:12:47", "description": "On CentOS7, I can't install Geant4 or Root from ports because of a freetype error:\n\n{{{\ng++ -pipe -m64 -Wshadow -Wall -W -Woverloaded-virtual -fPIC -Iinclude -pthread -Wno-deprecated-declarations -I. 
-I/cvmfs/icecube.opensciencegrid.org/py2-v2/RHEL_7_x86_64/i3ports/var/db/dports/build/file._cvmfs_icecube.opensciencegrid.org_py2-v2_RHEL_7_x86_64_i3ports_var_db_dports_sources_rsync.code.icecube.wisc.edu_icecube-tools-ports_science_root_5.34.18/work/root/cint/cint/inc -o misc/memstat/src/G__MemStat.o -c misc/memstat/src/G__MemStat.cxx\nIn file included from asfont.c:67:0:\n../../../../graf2d/freetype/src/freetype-2.3.12/include/freetype/freetype.h:21:2: error: #error \"`ft2build.h' hasn't been included yet!\"\n #error \"`ft2build.h' hasn't been included yet!\"\n ^\n../../../../graf2d/freetype/src/freetype-2.3.12/include/freetype/freetype.h:22:2: error: #error \"Please always use macros to include FreeType header files.\"\n #error \"Please always use macros to include FreeType header files.\"\n ^\n../../../../graf2d/freetype/src/freetype-2.3.12/include/freetype/freetype.h:23:2: error: #error \"Example:\"\n #error \"Example:\"\n ^\n../../../../graf2d/freetype/src/freetype-2.3.12/include/freetype/freetype.h:24:2: error: #error \" #include <ft2build.h>\"\n #error \" #include <ft2build.h>\"\n ^\n../../../../graf2d/freetype/src/freetype-2.3.12/include/freetype/freetype.h:25:2: error: #error \" #include FT_FREETYPE_H\"\n #error \" #include FT_FREETYPE_H\"\n ^\nmake[1]: *** [asfont.o] Error 1\n}}}\n\nNote that the header in question is available at `/usr/include/ft2build.h`", "reporter": "david.schultz", "cc": "", "resolution": "fixed", "_ts": "1550067167842669", "component": "tools/ports", "summary": "[I3_PORTS] CentOS7 freetype error", "priority": "major", "keywords": "", "time": "2016-05-12T16:23:08", "milestone": "", "owner": "nega", "type": "defect" } ```
defect
freetype error trac on i can t install or root from ports because of a freetype error text g pipe wshadow wall w woverloaded virtual fpic iinclude pthread wno deprecated declarations i i cvmfs icecube opensciencegrid org rhel var db dports build file cvmfs icecube opensciencegrid org rhel var db dports sources rsync code icecube wisc edu icecube tools ports science root work root cint cint inc o misc memstat src g memstat o c misc memstat src g memstat cxx in file included from asfont c freetype src freetype include freetype freetype h error error h hasn t been included yet error h hasn t been included yet freetype src freetype include freetype freetype h error error please always use macros to include freetype header files error please always use macros to include freetype header files freetype src freetype include freetype freetype h error error example error example freetype src freetype include freetype freetype h error error include error include freetype src freetype include freetype freetype h error error include ft freetype h error include ft freetype h make error note that the header in question is available at usr include h migrated from json status closed changetime description on i can t install or root from ports because of a freetype error n n ng pipe wshadow wall w woverloaded virtual fpic iinclude pthread wno deprecated declarations i i cvmfs icecube opensciencegrid org rhel var db dports build file cvmfs icecube opensciencegrid org rhel var db dports sources rsync code icecube wisc edu icecube tools ports science root work root cint cint inc o misc memstat src g memstat o c misc memstat src g memstat cxx nin file included from asfont c n freetype src freetype include freetype freetype h error error h hasn t been included yet n error h hasn t been included yet n n freetype src freetype include freetype freetype h error error please always use macros to include freetype header files n error please always use macros to include freetype header files n n freetype src freetype include freetype freetype h error error example n error example n n freetype src freetype include freetype freetype h error error include n error include n n freetype src freetype include freetype freetype h error error include ft freetype h n error include ft freetype h n nmake error n n nnote that the header in question is available at usr include h reporter david schultz cc resolution fixed ts component tools ports summary freetype error priority major keywords time milestone owner nega type defect
1
34,813
7,460,624,029
IssuesEvent
2018-03-30 20:35:16
kerdokullamae/test_koik_issued
https://api.github.com/repos/kerdokullamae/test_koik_issued
closed
The "Kirjeldus RDF failina" (Description as an RDF file) action on the KÜ detail data page crashes
C: AIS P: highest R: fixed T: defect
**Reported by aadikaljuvee on 17 Mar 2017 08:23 UTC** http://crowdsourcing.www.dev-ais-web.arhiiv.ee/et/description_unit/rdfDownload/124020087069 404 - Not Found The requested URL /et/description_unit/rdfDownload/124020087069 was not found on this server.
1.0
The "Kirjeldus RDF failina" (Description as an RDF file) action on the KÜ detail data page crashes - **Reported by aadikaljuvee on 17 Mar 2017 08:23 UTC** http://crowdsourcing.www.dev-ais-web.arhiiv.ee/et/description_unit/rdfDownload/124020087069 404 - Not Found The requested URL /et/description_unit/rdfDownload/124020087069 was not found on this server.
defect
the kirjeldus rdf failina description as an rdf file action on the kü detail data page crashes reported by aadikaljuvee on mar utc not found the requested url et description unit rdfdownload was not found on this server
1
46,155
13,055,859,871
IssuesEvent
2020-07-30 02:57:02
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
opened
Python I3File should mix in non-native keys (Trac #656)
Incomplete Migration Migrated from Trac dataio defect
Migrated from https://code.icecube.wisc.edu/ticket/656 ```json { "status": "closed", "changetime": "2011-10-25T14:30:56", "description": "I3Frames emitted by I3Modules contain the non-native keys (e.g. GCDQ) found in all previous frames. It would be convenient if the Python I3File had the same functionality.\n\nA hobo implementation can be found here:\n\nhttp://code.icecube.wisc.edu/projects/icecube/browser/sandbox/python-event-viewer/trunk/python/hobomux.py?rev=79730", "reporter": "jvansanten", "cc": "", "resolution": "fixed", "_ts": "1319553056000000", "component": "dataio", "summary": "Python I3File should mix in non-native keys", "priority": "normal", "keywords": "", "time": "2011-10-25T14:27:41", "milestone": "", "owner": "", "type": "defect" } ```
1.0
Python I3File should mix in non-native keys (Trac #656) - Migrated from https://code.icecube.wisc.edu/ticket/656 ```json { "status": "closed", "changetime": "2011-10-25T14:30:56", "description": "I3Frames emitted by I3Modules contain the non-native keys (e.g. GCDQ) found in all previous frames. It would be convenient if the Python I3File had the same functionality.\n\nA hobo implementation can be found here:\n\nhttp://code.icecube.wisc.edu/projects/icecube/browser/sandbox/python-event-viewer/trunk/python/hobomux.py?rev=79730", "reporter": "jvansanten", "cc": "", "resolution": "fixed", "_ts": "1319553056000000", "component": "dataio", "summary": "Python I3File should mix in non-native keys", "priority": "normal", "keywords": "", "time": "2011-10-25T14:27:41", "milestone": "", "owner": "", "type": "defect" } ```
defect
python should mix in non native keys trac migrated from json status closed changetime description emitted by contain the non native keys e g gcdq found in all previous frames it would be convenient if the python had the same functionality n na hobo implementation can be found here n n reporter jvansanten cc resolution fixed ts component dataio summary python should mix in non native keys priority normal keywords time milestone owner type defect
1
27,346
4,968,434,513
IssuesEvent
2016-12-05 09:52:37
TNGSB/eWallet
https://api.github.com/repos/TNGSB/eWallet
closed
eWallet_MobileApp/Portal_Unsuccessful Reload Transaction # Live_026
ABL Defect - Showstopper (Sev-1) Live Environment
Whenever there is a Reload transaction, iPay will send a response message on the fund payment to both the Mobile App as well as the eWallet Back end system, However the Mobile App (Front end) has failed to receive this message from iPay and hence has not sent a further message to eWallet Back end system to process the transaction and therefore there was a Unsuccessful Reload Transaction, Investigation is still in progress and fix is yet pending.
1.0
eWallet_MobileApp/Portal_Unsuccessful Reload Transaction # Live_026 - Whenever there is a Reload transaction, iPay will send a response message on the fund payment to both the Mobile App as well as the eWallet Back end system, However the Mobile App (Front end) has failed to receive this message from iPay and hence has not sent a further message to eWallet Back end system to process the transaction and therefore there was a Unsuccessful Reload Transaction, Investigation is still in progress and fix is yet pending.
defect
ewallet mobileapp portal unsuccessful reload transaction live whenever there is a reload transaction ipay will send a response message on the fund payment to both the mobile app as well as the ewallet back end system however the mobile app front end has failed to receive this message from ipay and hence has not sent a further message to ewallet back end system to process the transaction and therefore there was a unsuccessful reload transaction investigation is still in progress and fix is yet pending
1
4,348
2,610,092,134
IssuesEvent
2015-02-26 18:27:49
chrsmith/dsdsdaadf
https://api.github.com/repos/chrsmith/dsdsdaadf
opened
How to get rid of acne in Shenzhen
auto-migrated Priority-Medium Type-Defect
``` How to get rid of acne in Shenzhen [Shenzhen Hanfang Keyan national hotline 400-869-1818, 24-hour QQ 4008691818]. Shenzhen Hanfang Keyan is a professional acne-removal chain built around a Korean secret formula — Hanfang Keyan, a state-registered therapeutic cosmetic brand and acne remedy. The chain combines the Korean formula with a professional "no rebound" healthy acne-removal technique and an advanced "deluxe colour light" device, pioneering contract-guaranteed professional treatment of acne and pimples in China, and has cleared the spots on many customers' faces. ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:54
1.0
How to get rid of acne in Shenzhen - ``` How to get rid of acne in Shenzhen [Shenzhen Hanfang Keyan national hotline 400-869-1818, 24-hour QQ 4008691818]. Shenzhen Hanfang Keyan is a professional acne-removal chain built around a Korean secret formula — Hanfang Keyan, a state-registered therapeutic cosmetic brand and acne remedy. The chain combines the Korean formula with a professional "no rebound" healthy acne-removal technique and an advanced "deluxe colour light" device, pioneering contract-guaranteed professional treatment of acne and pimples in China, and has cleared the spots on many customers' faces. ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:54
defect
how to get rid of acne in shenzhen how to get rid of acne in shenzhen shenzhen hanfang keyan is a professional acne removal chain built around a korean secret formula hanfang keyan a state registered therapeutic cosmetic brand and acne remedy the chain combines the korean formula with a professional no rebound healthy acne removal technique and an advanced deluxe colour light device pioneering contract guaranteed professional treatment of acne and pimples in china and has cleared the spots on many customers faces original issue reported on code google com by szft com on may at
1
380,993
11,271,544,987
IssuesEvent
2020-01-14 13:16:46
grafana/grafana
https://api.github.com/repos/grafana/grafana
closed
Grafana pop-up annotations instead of values on iPad when I touch a graph
area/panel/graph priority/important-soon type/bug
**What happened**: I'm trying to use Grafana on my iPad however when I touch a graph, I cannot see the values. Values show instantly for a sec then "annotation" bar pops-up. **What you expected to happen**: I expect grafana to show the values when I touch on a graph. **How to reproduce it (as minimally and precisely as possible)**: Go to a grafana bar on an iPad and touch a graph. **Anything else we need to know?**: Could that be a CSS issue? **Environment**: - Grafana version: 6.4.1 - Data source type & version: Influxdb - OS Grafana is installed on: Debian 9 - User OS & Browser: Safari + iPad iOS13 - Grafana plugins: No additional plugins. - Others:
1.0
Grafana pop-up annotations instead of values on iPad when I touch a graph - **What happened**: I'm trying to use Grafana on my iPad however when I touch a graph, I cannot see the values. Values show instantly for a sec then "annotation" bar pops-up. **What you expected to happen**: I expect grafana to show the values when I touch on a graph. **How to reproduce it (as minimally and precisely as possible)**: Go to a grafana bar on an iPad and touch a graph. **Anything else we need to know?**: Could that be a CSS issue? **Environment**: - Grafana version: 6.4.1 - Data source type & version: Influxdb - OS Grafana is installed on: Debian 9 - User OS & Browser: Safari + iPad iOS13 - Grafana plugins: No additional plugins. - Others:
non_defect
grafana pop up annotations instead of values on ipad when i touch a graph what happened i m trying to use grafana on my ipad however when i touch a graph i cannot see the values values show instantly for a sec then annotation bar pops up what you expected to happen i expect grafana to show the values when i touch on a graph how to reproduce it as minimally and precisely as possible go to a grafana bar on an ipad and touch a graph anything else we need to know could that be a css issue environment grafana version data source type version influxdb os grafana is installed on debian user os browser safari ipad grafana plugins no additional plugins others
0
30,489
6,134,949,927
IssuesEvent
2017-06-26 02:43:58
larcenists/larceny
https://api.github.com/repos/larcenists/larceny
closed
exact-integer-sqrt doesn't work with large arguments in Petit Larceny
bug P: minor R: fixed T: defect
``` Petit Larceny v1.1a2 (precise:Linux) > (exact-integer-sqrt #e1e20) Error: bytevector-like-set!: illegal third argument: 6027 cannot be stored in a bytevector-like ```
1.0
exact-integer-sqrt doesn't work with large arguments in Petit Larceny - ``` Petit Larceny v1.1a2 (precise:Linux) > (exact-integer-sqrt #e1e20) Error: bytevector-like-set!: illegal third argument: 6027 cannot be stored in a bytevector-like ```
defect
exact integer sqrt doesn t work with large arguments in petit larceny petit larceny precise linux exact integer sqrt error bytevector like set illegal third argument cannot be stored in a bytevector like
1
809,055
30,122,845,046
IssuesEvent
2023-06-30 16:37:26
Three-s-A-Crowd-Games/Gerda
https://api.github.com/repos/Three-s-A-Crowd-Games/Gerda
opened
overlapping rooms
bug low priority
some predefined rooms can overlap and cause chaos. not 100% sure but I think I saw an amboss on top of a trapdorr at the medianight once
1.0
overlapping rooms - some predefined rooms can overlap and cause chaos. not 100% sure but I think I saw an amboss on top of a trapdorr at the medianight once
non_defect
overlapping rooms some predefined rooms can overlap and cause chaos not sure but i think i saw an amboss on top of a trapdorr at the medianight once
0
70,532
23,221,792,994
IssuesEvent
2022-08-02 18:58:37
vector-im/element-android
https://api.github.com/repos/vector-im/element-android
opened
"No network" error when attempting to connect to a local, non-public synapse server via HTTP
T-Defect
### Steps to reproduce Basically, having the same issue noted in this error report below from a year ago. I would really like to have a way to work around this error. Being local only homeserver, I would like to avoid having to make SSL certs for a single user at this time. I have been able to link all other devices that are on iOS without issue. I receive the same "No Network" error, but no further information. The local network and VPN work flawlessly otherwise. My server address is: 'http://synapse.lan:8008' I immediately receive a No Network error on Anroid only, exactly the same issue noted in the report below. Thank you in advance. https://github.com/vector-im/element-android/issues/3280#issue-876271603 ### Outcome #### What did you expect to happen? _Similar to iOS, Windows and Linux applications, connection to server established, and moving to a login screen._ #### What happened instead? I receive a "No network" error with no further error codes or reasoning. Rest of network connections aside from element-android work flawlessly. ### Your phone model Samsung S9 ### Operating system version Android 12 ### Application version and app store element-android 1.4.27 ### Homeserver Synapse, local-only, non-federated, non-public facing via http ### Will you send logs? No ### Are you willing to provide a PR? No
1.0
"No network" error when attempting to connect to a local, non-public synapse server via HTTP - ### Steps to reproduce Basically, having the same issue noted in this error report below from a year ago. I would really like to have a way to work around this error. Being local only homeserver, I would like to avoid having to make SSL certs for a single user at this time. I have been able to link all other devices that are on iOS without issue. I receive the same "No Network" error, but no further information. The local network and VPN work flawlessly otherwise. My server address is: 'http://synapse.lan:8008' I immediately receive a No Network error on Anroid only, exactly the same issue noted in the report below. Thank you in advance. https://github.com/vector-im/element-android/issues/3280#issue-876271603 ### Outcome #### What did you expect to happen? _Similar to iOS, Windows and Linux applications, connection to server established, and moving to a login screen._ #### What happened instead? I receive a "No network" error with no further error codes or reasoning. Rest of network connections aside from element-android work flawlessly. ### Your phone model Samsung S9 ### Operating system version Android 12 ### Application version and app store element-android 1.4.27 ### Homeserver Synapse, local-only, non-federated, non-public facing via http ### Will you send logs? No ### Are you willing to provide a PR? No
defect
no network error when attempting to connect to a local non public synapse server via http steps to reproduce basically having the same issue noted in this error report below from a year ago i would really like to have a way to work around this error being local only homeserver i would like to avoid having to make ssl certs for a single user at this time i have been able to link all other devices that are on ios without issue i receive the same no network error but no further information the local network and vpn work flawlessly otherwise my server address is i immediately receive a no network error on anroid only exactly the same issue noted in the report below thank you in advance outcome what did you expect to happen similar to ios windows and linux applications connection to server established and moving to a login screen what happened instead i receive a no network error with no further error codes or reasoning rest of network connections aside from element android work flawlessly your phone model samsung operating system version android application version and app store element android homeserver synapse local only non federated non public facing via http will you send logs no are you willing to provide a pr no
1
153,181
13,496,098,136
IssuesEvent
2020-09-12 02:08:30
filecoin-project/sentinel
https://api.github.com/repos/filecoin-project/sentinel
closed
Sentinel architecture diagram
documentation
It would be great to have a high level architecture diagram for the components that make up sentinel. From what I can see so far, it's something like this: ![sentinel](https://user-images.githubusercontent.com/58871/92404078-a48f5a00-f12a-11ea-8987-8b5a42091ad2.png)
1.0
Sentinel architecture diagram - It would be great to have a high level architecture diagram for the components that make up sentinel. From what I can see so far, it's something like this: ![sentinel](https://user-images.githubusercontent.com/58871/92404078-a48f5a00-f12a-11ea-8987-8b5a42091ad2.png)
non_defect
sentinel architecture diagram it would be great to have a high level architecture diagram for the components that make up sentinel from what i can see so far it s something like this
0
44,254
17,934,511,475
IssuesEvent
2021-09-10 13:46:23
hashicorp/terraform-provider-aws
https://api.github.com/repos/hashicorp/terraform-provider-aws
closed
Support for resizing prefix lists
enhancement service/ec2
<!--- Please keep this note for the community ---> ### Community Note * Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request * If you are interested in working on this issue or have submitted a pull request, please leave a comment <!--- Thank you for keeping this note for the community ---> ### Description https://aws.amazon.com/about-aws/whats-new/2021/08/amazon-vpc-resize-prefix-list/ says that VPC prefix lists can now be resized without destroy and re-creating them. ### New or Affected Resource(s) * aws_ec2_managed_prefix_list ### Potential Terraform Configuration Perhaps the `max_entries` parameter could be removed entirely; if the AWS API still requires it, it could be computed on the fly based on the length of the list. Or it could still be required by Terraform, but just allow for a change without destroying and re-creating the list; but if it could be made optional, that'd be a nice feature. ### References * https://aws.amazon.com/about-aws/whats-new/2021/08/amazon-vpc-resize-prefix-list/
1.0
Support for resizing prefix lists - <!--- Please keep this note for the community ---> ### Community Note * Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request * If you are interested in working on this issue or have submitted a pull request, please leave a comment <!--- Thank you for keeping this note for the community ---> ### Description https://aws.amazon.com/about-aws/whats-new/2021/08/amazon-vpc-resize-prefix-list/ says that VPC prefix lists can now be resized without destroy and re-creating them. ### New or Affected Resource(s) * aws_ec2_managed_prefix_list ### Potential Terraform Configuration Perhaps the `max_entries` parameter could be removed entirely; if the AWS API still requires it, it could be computed on the fly based on the length of the list. Or it could still be required by Terraform, but just allow for a change without destroying and re-creating the list; but if it could be made optional, that'd be a nice feature. ### References * https://aws.amazon.com/about-aws/whats-new/2021/08/amazon-vpc-resize-prefix-list/
non_defect
support for resizing prefix lists community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment description says that vpc prefix lists can now be resized without destroy and re creating them new or affected resource s aws managed prefix list potential terraform configuration perhaps the max entries parameter could be removed entirely if the aws api still requires it it could be computed on the fly based on the length of the list or it could still be required by terraform but just allow for a change without destroying and re creating the list but if it could be made optional that d be a nice feature references
0
42,519
17,173,361,893
IssuesEvent
2021-07-15 08:22:34
PrestaShop/PrestaShop
https://api.github.com/repos/PrestaShop/PrestaShop
closed
Module without description in webservice ressource break ressources listing on /api/
1.7.6.4 Bug Modules NMI WS Webservice
<!-- **************************** DO NOT disclose security issues here, contact security@prestashop.com instead! **************************** --> #### Describe the bug If installed modules use the hook `hookAddWebserviceResources` without providing "description" the list on the root /api/ path fails with `Undefined index: description` errors. For instance: ```php /** * @param array $params * @return array */ public function hookAddWebserviceResources($params) { return array( 'colissimo_custom_products' => array( 'class' => 'ColissimoCustomProduct', 'forbidden_method' => array('HEAD', 'POST', 'PUT', 'DELETE'), ), 'colissimo_custom_categories' => array( 'class' => 'ColissimoCustomCategory', 'forbidden_method' => array('HEAD', 'POST', 'PUT', 'DELETE'), ), 'colissimo_ace' => array( 'class' => 'ColissimoACE', 'forbidden_method' => array('GET', 'HEAD', 'PUT', 'DELETE'), ), ); } ``` The error is raised [here](https://github.com/PrestaShop/PrestaShop/blob/57894f9e9acb1f75088ec4ae63aaf6c468eac4b5/classes/webservice/WebserviceOutputBuilder.php#L331) #### Expected behavior Being able to list webservice ressources even if no description as been provided. #### Steps to Reproduce Steps to reproduce the behavior: 1. Install colissimo offical module for instance 2. Create a webservice API key 3. Open 'https://yourawesomeshop.com/api/' 4. See error ![image](https://user-images.githubusercontent.com/4567538/123407152-ec914780-d5ab-11eb-8908-1e671c127f49.png) #### Additional information * PrestaShop version: 1.7.6.4 * PHP version: 7.2
1.0
Module without description in webservice ressource break ressources listing on /api/ - <!-- **************************** DO NOT disclose security issues here, contact security@prestashop.com instead! **************************** --> #### Describe the bug If installed modules use the hook `hookAddWebserviceResources` without providing "description" the list on the root /api/ path fails with `Undefined index: description` errors. For instance: ```php /** * @param array $params * @return array */ public function hookAddWebserviceResources($params) { return array( 'colissimo_custom_products' => array( 'class' => 'ColissimoCustomProduct', 'forbidden_method' => array('HEAD', 'POST', 'PUT', 'DELETE'), ), 'colissimo_custom_categories' => array( 'class' => 'ColissimoCustomCategory', 'forbidden_method' => array('HEAD', 'POST', 'PUT', 'DELETE'), ), 'colissimo_ace' => array( 'class' => 'ColissimoACE', 'forbidden_method' => array('GET', 'HEAD', 'PUT', 'DELETE'), ), ); } ``` The error is raised [here](https://github.com/PrestaShop/PrestaShop/blob/57894f9e9acb1f75088ec4ae63aaf6c468eac4b5/classes/webservice/WebserviceOutputBuilder.php#L331) #### Expected behavior Being able to list webservice ressources even if no description as been provided. #### Steps to Reproduce Steps to reproduce the behavior: 1. Install colissimo offical module for instance 2. Create a webservice API key 3. Open 'https://yourawesomeshop.com/api/' 4. See error ![image](https://user-images.githubusercontent.com/4567538/123407152-ec914780-d5ab-11eb-8908-1e671c127f49.png) #### Additional information * PrestaShop version: 1.7.6.4 * PHP version: 7.2
non_defect
module without description in webservice ressource break ressources listing on api do not disclose security issues here contact security prestashop com instead describe the bug if installed modules use the hook hookaddwebserviceresources without providing description the list on the root api path fails with undefined index description errors for instance php param array params return array public function hookaddwebserviceresources params return array colissimo custom products array class colissimocustomproduct forbidden method array head post put delete colissimo custom categories array class colissimocustomcategory forbidden method array head post put delete colissimo ace array class colissimoace forbidden method array get head put delete the error is raised expected behavior being able to list webservice ressources even if no description as been provided steps to reproduce steps to reproduce the behavior install colissimo offical module for instance create a webservice api key open see error additional information prestashop version php version
0
14,482
2,813,030,457
IssuesEvent
2015-05-18 12:38:58
andrew867/epassportviewer
https://api.github.com/repos/andrew867/epassportviewer
closed
E-passport viewer does not read the passport in Mac OSX
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. Connected the OMNIKEY CARDMAN 5321 USB 2. Installed driver from hidglobal.com: ifdokrfid_mac_intel_10.5_i386-2.4.0.dmg (My Mac OSX version is 10.5.7) and restarted the computer when asked 3. Copied the e-passport application to the Applications folder 4. Started the e-passport app. 5. Put the passport on the omnikey 5321 reader - its indicator was green and blinked red for several seconds 6. Entered the MRZ data 7.Pressed the READ button in the e-passport app. 8. Took and dropped the passport on the reader when asked - it blinked with red light again for several seconds 9. Pressed OK 10. Nothing has been read - no reading window appeared. Could you please advise and assist? Note: After this failure I tried and succeed to read my passport on the PC with Windows XP 32bit installed with this Omnikey 5321 reader. ``` Original issue reported on code.google.com by `infokir2...@gmail.com` on 15 Feb 2011 at 3:22
1.0
E-passport viewer does not read the passport in Mac OSX - ``` What steps will reproduce the problem? 1. Connected the OMNIKEY CARDMAN 5321 USB 2. Installed driver from hidglobal.com: ifdokrfid_mac_intel_10.5_i386-2.4.0.dmg (My Mac OSX version is 10.5.7) and restarted the computer when asked 3. Copied the e-passport application to the Applications folder 4. Started the e-passport app. 5. Put the passport on the omnikey 5321 reader - its indicator was green and blinked red for several seconds 6. Entered the MRZ data 7.Pressed the READ button in the e-passport app. 8. Took and dropped the passport on the reader when asked - it blinked with red light again for several seconds 9. Pressed OK 10. Nothing has been read - no reading window appeared. Could you please advise and assist? Note: After this failure I tried and succeed to read my passport on the PC with Windows XP 32bit installed with this Omnikey 5321 reader. ``` Original issue reported on code.google.com by `infokir2...@gmail.com` on 15 Feb 2011 at 3:22
defect
e passport viewer does not read the passport in mac osx what steps will reproduce the problem connected the omnikey cardman usb installed driver from hidglobal com ifdokrfid mac intel dmg my mac osx version is and restarted the computer when asked copied the e passport application to the applications folder started the e passport app put the passport on the omnikey reader its indicator was green and blinked red for several seconds entered the mrz data pressed the read button in the e passport app took and dropped the passport on the reader when asked it blinked with red light again for several seconds pressed ok nothing has been read no reading window appeared could you please advise and assist note after this failure i tried and succeed to read my passport on the pc with windows xp installed with this omnikey reader original issue reported on code google com by gmail com on feb at
1
46,752
13,181,223,310
IssuesEvent
2020-08-12 14:01:00
dotnet/aspnetcore
https://api.github.com/repos/dotnet/aspnetcore
closed
Microsoft.AspNetCore.Authentication.OpenIdConnect - Unable to change nonce expiration time
:heavy_check_mark: Resolution: Answered Status: Resolved area-security
Hi all. I have created new sample IdentityServer4 (with UI) and consuming MVC Client app, hosting them both locally. If I try to login with sample user within 15 minutes of opening login page, everything works fine. However, I am unable to allow users to log in if the login page has become stale (opened for more than default 15 minutes). I have modified OpenIdConnect configuration in the consuming MVC app by adding these to lines in order to increase remote authentication timeout (correlation cookie) and nonce expiration: ![image](https://user-images.githubusercontent.com/11403710/89792719-3ce2f080-db25-11ea-901d-a90bef3794c4.png) Inspecting the response in chrome after navigating to login page, the cookie values are correct (24h in future, minus 2h of time zone difference) ![image](https://user-images.githubusercontent.com/11403710/89792770-479d8580-db25-11ea-819f-5e0d58bec217.png) But if I leave the page opened for more than 15 minutes and try to log in, I get the following error: ![image](https://user-images.githubusercontent.com/11403710/89792813-55530b00-db25-11ea-8407-3a5f8cc35d7f.png) As if the change was not applied - it is reporting that nonce had expired and showing default lifetime of 15 minutes. What am I missing here? Where is the mismatch coming from? I've tried clearing cookies, using incognito, both Chrome and Edge - the result is always the same. I've already asked question here - https://github.com/IdentityServer/IdentityServer4/issues/4728 but was pointed here by brockallen.
True
Microsoft.AspNetCore.Authentication.OpenIdConnect - Unable to change nonce expiration time - Hi all. I have created new sample IdentityServer4 (with UI) and consuming MVC Client app, hosting them both locally. If I try to login with sample user within 15 minutes of opening login page, everything works fine. However, I am unable to allow users to log in if the login page has become stale (opened for more than default 15 minutes). I have modified OpenIdConnect configuration in the consuming MVC app by adding these to lines in order to increase remote authentication timeout (correlation cookie) and nonce expiration: ![image](https://user-images.githubusercontent.com/11403710/89792719-3ce2f080-db25-11ea-901d-a90bef3794c4.png) Inspecting the response in chrome after navigating to login page, the cookie values are correct (24h in future, minus 2h of time zone difference) ![image](https://user-images.githubusercontent.com/11403710/89792770-479d8580-db25-11ea-819f-5e0d58bec217.png) But if I leave the page opened for more than 15 minutes and try to log in, I get the following error: ![image](https://user-images.githubusercontent.com/11403710/89792813-55530b00-db25-11ea-8407-3a5f8cc35d7f.png) As if the change was not applied - it is reporting that nonce had expired and showing default lifetime of 15 minutes. What am I missing here? Where is the mismatch coming from? I've tried clearing cookies, using incognito, both Chrome and Edge - the result is always the same. I've already asked question here - https://github.com/IdentityServer/IdentityServer4/issues/4728 but was pointed here by brockallen.
non_defect
microsoft aspnetcore authentication openidconnect unable to change nonce expiration time hi all i have created new sample with ui and consuming mvc client app hosting them both locally if i try to login with sample user within minutes of opening login page everything works fine however i am unable to allow users to log in if the login page has become stale opened for more than default minutes i have modified openidconnect configuration in the consuming mvc app by adding these to lines in order to increase remote authentication timeout correlation cookie and nonce expiration inspecting the response in chrome after navigating to login page the cookie values are correct in future minus of time zone difference but if i leave the page opened for more than minutes and try to log in i get the following error as if the change was not applied it is reporting that nonce had expired and showing default lifetime of minutes what am i missing here where is the mismatch coming from i ve tried clearing cookies using incognito both chrome and edge the result is always the same i ve already asked question here but was pointed here by brockallen
0
18,740
3,083,751,081
IssuesEvent
2015-08-24 11:03:12
contao/core-bundle
https://api.github.com/repos/contao/core-bundle
closed
Remove the `eager` and `lazy` loading options
defect needs backport up for discussion
As e.g. stated in contao/core#7928, the model handling needs to be fixed and optimized. **Eager and lazy loading** Right now it is possible to pass `['eager' => true]` as an option to enforce eager loading of related models. However, it is not possible to pass `['eager' => false]` to enforce lazy loading. **Retrieving related models** Currently the query builder generates a JOIN statement and adds the related columns to every row. The advantage of this approach is that there is only one database query. The disadvantage is that the amount of data sent is quite big. **A possible solution** To solve both issues, we should simply remove the `eager` and `lazy` options from the DCA and always use lazy loading unless `['eager' => true]` is explicitly set. It will keep the queries small and it will load existing models from the registry instead of creating them from the DB result. @contao/developers What do you think?
1.0
Remove the `eager` and `lazy` loading options - As e.g. stated in contao/core#7928, the model handling needs to be fixed and optimized. **Eager and lazy loading** Right now it is possible to pass `['eager' => true]` as an option to enforce eager loading of related models. However, it is not possible to pass `['eager' => false]` to enforce lazy loading. **Retrieving related models** Currently the query builder generates a JOIN statement and adds the related columns to every row. The advantage of this approach is that there is only one database query. The disadvantage is that the amount of data sent is quite big. **A possible solution** To solve both issues, we should simply remove the `eager` and `lazy` options from the DCA and always use lazy loading unless `['eager' => true]` is explicitly set. It will keep the queries small and it will load existing models from the registry instead of creating them from the DB result. @contao/developers What do you think?
defect
remove the eager and lazy loading options as e g stated in contao core the model handling needs to be fixed and optimized eager and lazy loading right now it is possible to pass as an option to enforce eager loading of related models however it is not possible to pass to enforce lazy loading retrieving related models currently the query builder generates a join statement and adds the related columns to every row the advantage of this approach is that there is only one database query the disadvantage is that the amount of data sent is quite big a possible solution to solve both issues we should simply remove the eager and lazy options from the dca and always use lazy loading unless is explicitly set it will keep the queries small and it will load existing models from the registry instead of creating them from the db result contao developers what do you think
1
33,969
7,314,750,367
IssuesEvent
2018-03-01 08:39:16
TV-Rename/tvrename
https://api.github.com/repos/TV-Rename/tvrename
closed
Merging episodes breaks naming for following episodes
Priority-Medium Retest needed Type-Defect auto-migrated
``` What steps will reproduce the problem? 1. Merge episode 1 & 2 of Heroes season 4 2. 3. What is the expected output? What do you see instead? - Merge the episodes, name the rest accordingly. Instead, the name of episode 2 (Jump, Push, Fall) is applied to Episode 3, the name of episode 3 (Ink) is applied to Episode 4, and so on. Merging the first two episodes seems to break the naming of all the following episodes. What version of the product are you using? On what operating system? - 2.2.0a7, Windows 7 Please provide any additional information below. ``` Original issue reported on code.google.com by `coveb...@gmail.com` on 7 Oct 2009 at 8:09
1.0
Merging episodes breaks naming for following episodes - ``` What steps will reproduce the problem? 1. Merge episode 1 & 2 of Heroes season 4 2. 3. What is the expected output? What do you see instead? - Merge the episodes, name the rest accordingly. Instead, the name of episode 2 (Jump, Push, Fall) is applied to Episode 3, the name of episode 3 (Ink) is applied to Episode 4, and so on. Merging the first two episodes seems to break the naming of all the following episodes. What version of the product are you using? On what operating system? - 2.2.0a7, Windows 7 Please provide any additional information below. ``` Original issue reported on code.google.com by `coveb...@gmail.com` on 7 Oct 2009 at 8:09
defect
merging episodes breaks naming for following episodes what steps will reproduce the problem merge episode of heroes season what is the expected output what do you see instead merge the episodes name the rest accordingly instead the name of episode jump push fall is applied to episode the name of episode ink is applied to episode and so on merging the first two episodes seems to break the naming of all the following episodes what version of the product are you using on what operating system windows please provide any additional information below original issue reported on code google com by coveb gmail com on oct at
1
70,697
23,283,677,313
IssuesEvent
2022-08-05 14:26:13
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
opened
Join rules are shown as a blank timeline item
T-Defect
### Steps to reproduce 1. Create a new private room 2. Expand the 'configuring room' steps 3. Note there's a blank timeline entry, which if you view source turns out to be the join rules: ![Uploading Screenshot 2022-08-05 at 15.24.21.png…]() ### Outcome #### What did you expect? no mystery blank timeline entries. ### Operating system macOS ### Application version nightly ### How did you install the app? nightly ### Homeserver matrix.org ### Will you send logs? No
1.0
Join rules are shown as a blank timeline item - ### Steps to reproduce 1. Create a new private room 2. Expand the 'configuring room' steps 3. Note there's a blank timeline entry, which if you view source turns out to be the join rules: ![Uploading Screenshot 2022-08-05 at 15.24.21.png…]() ### Outcome #### What did you expect? no mystery blank timeline entries. ### Operating system macOS ### Application version nightly ### How did you install the app? nightly ### Homeserver matrix.org ### Will you send logs? No
defect
join rules are shown as a blank timeline item steps to reproduce create a new private room expand the configuring room steps note there s a blank timeline entry which if you view source turns out to be the join rules outcome what did you expect no mystery blank timeline entries operating system macos application version nightly how did you install the app nightly homeserver matrix org will you send logs no
1
102,704
16,579,113,568
IssuesEvent
2021-05-31 09:15:51
AlexRogalskiy/github-action-quotes
https://api.github.com/repos/AlexRogalskiy/github-action-quotes
opened
CVE-2021-33623 (Medium) detected in trim-newlines-3.0.0.tgz
security vulnerability
## CVE-2021-33623 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>trim-newlines-3.0.0.tgz</b></p></summary> <p>Trim newlines from the start and/or end of a string</p> <p>Library home page: <a href="https://registry.npmjs.org/trim-newlines/-/trim-newlines-3.0.0.tgz">https://registry.npmjs.org/trim-newlines/-/trim-newlines-3.0.0.tgz</a></p> <p>Path to dependency file: github-action-quotes/package.json</p> <p>Path to vulnerable library: github-action-quotes/node_modules/trim-newlines/package.json</p> <p> Dependency Hierarchy: - commit-analyzer-8.0.1.tgz (Root Library) - conventional-commits-parser-3.2.1.tgz - meow-8.1.2.tgz - :x: **trim-newlines-3.0.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/github-action-quotes/commit/cbfa082439faa7108cc86a6736eb4b06db126fce">cbfa082439faa7108cc86a6736eb4b06db126fce</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The trim-newlines package before 3.0.1 and 4.x before 4.0.1 for Node.js has an issue related to regular expression denial-of-service (ReDoS) for the .end() method. <p>Publish Date: 2021-05-28 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33623>CVE-2021-33623</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33623">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33623</a></p> <p>Release Date: 2021-05-28</p> <p>Fix Resolution: trim-newlines - 3.0.1, 4.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-33623 (Medium) detected in trim-newlines-3.0.0.tgz - ## CVE-2021-33623 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>trim-newlines-3.0.0.tgz</b></p></summary> <p>Trim newlines from the start and/or end of a string</p> <p>Library home page: <a href="https://registry.npmjs.org/trim-newlines/-/trim-newlines-3.0.0.tgz">https://registry.npmjs.org/trim-newlines/-/trim-newlines-3.0.0.tgz</a></p> <p>Path to dependency file: github-action-quotes/package.json</p> <p>Path to vulnerable library: github-action-quotes/node_modules/trim-newlines/package.json</p> <p> Dependency Hierarchy: - commit-analyzer-8.0.1.tgz (Root Library) - conventional-commits-parser-3.2.1.tgz - meow-8.1.2.tgz - :x: **trim-newlines-3.0.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/github-action-quotes/commit/cbfa082439faa7108cc86a6736eb4b06db126fce">cbfa082439faa7108cc86a6736eb4b06db126fce</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The trim-newlines package before 3.0.1 and 4.x before 4.0.1 for Node.js has an issue related to regular expression denial-of-service (ReDoS) for the .end() method. <p>Publish Date: 2021-05-28 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33623>CVE-2021-33623</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33623">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33623</a></p> <p>Release Date: 2021-05-28</p> <p>Fix Resolution: trim-newlines - 3.0.1, 4.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve medium detected in trim newlines tgz cve medium severity vulnerability vulnerable library trim newlines tgz trim newlines from the start and or end of a string library home page a href path to dependency file github action quotes package json path to vulnerable library github action quotes node modules trim newlines package json dependency hierarchy commit analyzer tgz root library conventional commits parser tgz meow tgz x trim newlines tgz vulnerable library found in head commit a href vulnerability details the trim newlines package before and x before for node js has an issue related to regular expression denial of service redos for the end method publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution trim newlines step up your open source security game with whitesource
0
169,452
13,148,066,981
IssuesEvent
2020-08-08 19:11:59
Rocologo/MobHunting
https://api.github.com/repos/Rocologo/MobHunting
closed
Could not pass event PlayerDeathEvent to MobHunting v7.0.5-SNAPSHOT-B984 java.lang.NullPointerException: null
Fixed - To be tested
[08:54:46 ERROR]: Could not pass event PlayerDeathEvent to MobHunting v7.0.5-SNAPSHOT-B984 java.lang.NullPointerException: null at one.lindegaard.MobHunting.rewards.CustomItems.getPlayerHead(CustomItems.java:92) ~[?:?] at one.lindegaard.MobHunting.MobHuntingManager.onMobDeath(MobHuntingManager.java:1944) ~[?:?] at com.destroystokyo.paper.event.executor.MethodHandleEventExecutor.execute(MethodHandleEventExecutor.java:37) ~[patched_1.15.2.jar:git-Paper-183] at co.aikar.timings.TimedEventExecutor.execute(TimedEventExecutor.java:80) ~[patched_1.15.2.jar:git-Paper-183] at org.bukkit.plugin.RegisteredListener.callEvent(RegisteredListener.java:70) ~[patched_1.15.2.jar:git-Paper-18 3] at org.bukkit.plugin.SimplePluginManager.callEvent(SimplePluginManager.java:607) ~[patched_1.15.2.jar:git-Paper -183] at org.bukkit.craftbukkit.v1_15_R1.event.CraftEventFactory.callPlayerDeathEvent(CraftEventFactory.java:784) ~[p atched_1.15.2.jar:git-Paper-183] at net.minecraft.server.v1_15_R1.EntityPlayer.die(EntityPlayer.java:598) ~[patched_1.15.2.jar:git-Paper-183] at net.minecraft.server.v1_15_R1.EntityLiving.damageEntity(EntityLiving.java:1152) ~[patched_1.15.2.jar:git-Pap er-183] at net.minecraft.server.v1_15_R1.EntityHuman.damageEntity(EntityHuman.java:801) ~[patched_1.15.2.jar:git-Paper- 183] at net.minecraft.server.v1_15_R1.EntityPlayer.damageEntity(EntityPlayer.java:754) ~[patched_1.15.2.jar:git-Pape r-183] at net.minecraft.server.v1_15_R1.EntityHuman.attack(EntityHuman.java:1113) ~[patched_1.15.2.jar:git-Paper-183] at net.minecraft.server.v1_15_R1.EntityPlayer.attack(EntityPlayer.java:1706) ~[patched_1.15.2.jar:git-Paper-183 ] at net.minecraft.server.v1_15_R1.PlayerConnection.a(PlayerConnection.java:2054) ~[patched_1.15.2.jar:git-Paper- 183] at net.minecraft.server.v1_15_R1.PacketPlayInUseEntity.a(PacketPlayInUseEntity.java:51) ~[patched_1.15.2.jar:gi t-Paper-183] at net.minecraft.server.v1_15_R1.PacketPlayInUseEntity.a(PacketPlayInUseEntity.java:6) ~[patched_1.15.2.jar:git -Paper-183] at net.minecraft.server.v1_15_R1.PlayerConnectionUtils.lambda$ensureMainThread$0(PlayerConnectionUtils.java:23) ~[patched_1.15.2.jar:git-Paper-183] at net.minecraft.server.v1_15_R1.TickTask.run(SourceFile:18) ~[patched_1.15.2.jar:git-Paper-183] at net.minecraft.server.v1_15_R1.IAsyncTaskHandler.executeTask(IAsyncTaskHandler.java:136) ~[patched_1.15.2.jar :git-Paper-183] at net.minecraft.server.v1_15_R1.IAsyncTaskHandlerReentrant.executeTask(SourceFile:23) ~[patched_1.15.2.jar:git -Paper-183] at net.minecraft.server.v1_15_R1.IAsyncTaskHandler.executeNext(IAsyncTaskHandler.java:109) ~[patched_1.15.2.jar :git-Paper-183] at net.minecraft.server.v1_15_R1.MinecraftServer.ba(MinecraftServer.java:1059) ~[patched_1.15.2.jar:git-Paper-1 83] at net.minecraft.server.v1_15_R1.MinecraftServer.executeNext(MinecraftServer.java:1052) ~[patched_1.15.2.jar:gi t-Paper-183] at net.minecraft.server.v1_15_R1.IAsyncTaskHandler.awaitTasks(IAsyncTaskHandler.java:119) ~[patched_1.15.2.jar: git-Paper-183] at net.minecraft.server.v1_15_R1.MinecraftServer.sleepForTick(MinecraftServer.java:1022) ~[patched_1.15.2.jar:g it-Paper-183] at net.minecraft.server.v1_15_R1.MinecraftServer.run(MinecraftServer.java:945) ~[patched_1.15.2.jar:git-Paper-1 83] at java.lang.Thread.run(Thread.java:834) [?:?]
1.0
Could not pass event PlayerDeathEvent to MobHunting v7.0.5-SNAPSHOT-B984 java.lang.NullPointerException: null - [08:54:46 ERROR]: Could not pass event PlayerDeathEvent to MobHunting v7.0.5-SNAPSHOT-B984 java.lang.NullPointerException: null at one.lindegaard.MobHunting.rewards.CustomItems.getPlayerHead(CustomItems.java:92) ~[?:?] at one.lindegaard.MobHunting.MobHuntingManager.onMobDeath(MobHuntingManager.java:1944) ~[?:?] at com.destroystokyo.paper.event.executor.MethodHandleEventExecutor.execute(MethodHandleEventExecutor.java:37) ~[patched_1.15.2.jar:git-Paper-183] at co.aikar.timings.TimedEventExecutor.execute(TimedEventExecutor.java:80) ~[patched_1.15.2.jar:git-Paper-183] at org.bukkit.plugin.RegisteredListener.callEvent(RegisteredListener.java:70) ~[patched_1.15.2.jar:git-Paper-18 3] at org.bukkit.plugin.SimplePluginManager.callEvent(SimplePluginManager.java:607) ~[patched_1.15.2.jar:git-Paper -183] at org.bukkit.craftbukkit.v1_15_R1.event.CraftEventFactory.callPlayerDeathEvent(CraftEventFactory.java:784) ~[p atched_1.15.2.jar:git-Paper-183] at net.minecraft.server.v1_15_R1.EntityPlayer.die(EntityPlayer.java:598) ~[patched_1.15.2.jar:git-Paper-183] at net.minecraft.server.v1_15_R1.EntityLiving.damageEntity(EntityLiving.java:1152) ~[patched_1.15.2.jar:git-Pap er-183] at net.minecraft.server.v1_15_R1.EntityHuman.damageEntity(EntityHuman.java:801) ~[patched_1.15.2.jar:git-Paper- 183] at net.minecraft.server.v1_15_R1.EntityPlayer.damageEntity(EntityPlayer.java:754) ~[patched_1.15.2.jar:git-Pape r-183] at net.minecraft.server.v1_15_R1.EntityHuman.attack(EntityHuman.java:1113) ~[patched_1.15.2.jar:git-Paper-183] at net.minecraft.server.v1_15_R1.EntityPlayer.attack(EntityPlayer.java:1706) ~[patched_1.15.2.jar:git-Paper-183 ] at net.minecraft.server.v1_15_R1.PlayerConnection.a(PlayerConnection.java:2054) ~[patched_1.15.2.jar:git-Paper- 183] at net.minecraft.server.v1_15_R1.PacketPlayInUseEntity.a(PacketPlayInUseEntity.java:51) ~[patched_1.15.2.jar:gi t-Paper-183] at net.minecraft.server.v1_15_R1.PacketPlayInUseEntity.a(PacketPlayInUseEntity.java:6) ~[patched_1.15.2.jar:git -Paper-183] at net.minecraft.server.v1_15_R1.PlayerConnectionUtils.lambda$ensureMainThread$0(PlayerConnectionUtils.java:23) ~[patched_1.15.2.jar:git-Paper-183] at net.minecraft.server.v1_15_R1.TickTask.run(SourceFile:18) ~[patched_1.15.2.jar:git-Paper-183] at net.minecraft.server.v1_15_R1.IAsyncTaskHandler.executeTask(IAsyncTaskHandler.java:136) ~[patched_1.15.2.jar :git-Paper-183] at net.minecraft.server.v1_15_R1.IAsyncTaskHandlerReentrant.executeTask(SourceFile:23) ~[patched_1.15.2.jar:git -Paper-183] at net.minecraft.server.v1_15_R1.IAsyncTaskHandler.executeNext(IAsyncTaskHandler.java:109) ~[patched_1.15.2.jar :git-Paper-183] at net.minecraft.server.v1_15_R1.MinecraftServer.ba(MinecraftServer.java:1059) ~[patched_1.15.2.jar:git-Paper-1 83] at net.minecraft.server.v1_15_R1.MinecraftServer.executeNext(MinecraftServer.java:1052) ~[patched_1.15.2.jar:gi t-Paper-183] at net.minecraft.server.v1_15_R1.IAsyncTaskHandler.awaitTasks(IAsyncTaskHandler.java:119) ~[patched_1.15.2.jar: git-Paper-183] at net.minecraft.server.v1_15_R1.MinecraftServer.sleepForTick(MinecraftServer.java:1022) ~[patched_1.15.2.jar:g it-Paper-183] at net.minecraft.server.v1_15_R1.MinecraftServer.run(MinecraftServer.java:945) ~[patched_1.15.2.jar:git-Paper-1 83] at java.lang.Thread.run(Thread.java:834) [?:?]
non_defect
could not pass event playerdeathevent to mobhunting snapshot java lang nullpointerexception null could not pass event playerdeathevent to mobhunting snapshot java lang nullpointerexception null at one lindegaard mobhunting rewards customitems getplayerhead customitems java at one lindegaard mobhunting mobhuntingmanager onmobdeath mobhuntingmanager java at com destroystokyo paper event executor methodhandleeventexecutor execute methodhandleeventexecutor java at co aikar timings timedeventexecutor execute timedeventexecutor java at org bukkit plugin registeredlistener callevent registeredlistener java patched jar git paper at org bukkit plugin simplepluginmanager callevent simplepluginmanager java patched jar git paper at org bukkit craftbukkit event crafteventfactory callplayerdeathevent crafteventfactory java p atched jar git paper at net minecraft server entityplayer die entityplayer java at net minecraft server entityliving damageentity entityliving java patched jar git pap er at net minecraft server entityhuman damageentity entityhuman java patched jar git paper at net minecraft server entityplayer damageentity entityplayer java patched jar git pape r at net minecraft server entityhuman attack entityhuman java at net minecraft server entityplayer attack entityplayer java patched jar git paper at net minecraft server playerconnection a playerconnection java patched jar git paper at net minecraft server packetplayinuseentity a packetplayinuseentity java patched jar gi t paper at net minecraft server packetplayinuseentity a packetplayinuseentity java patched jar git paper at net minecraft server playerconnectionutils lambda ensuremainthread playerconnectionutils java at net minecraft server ticktask run sourcefile at net minecraft server iasynctaskhandler executetask iasynctaskhandler java patched jar git paper at net minecraft server iasynctaskhandlerreentrant executetask sourcefile patched jar git paper at net minecraft server iasynctaskhandler executenext iasynctaskhandler java patched jar git paper at net minecraft server minecraftserver ba minecraftserver java patched jar git paper at net minecraft server minecraftserver executenext minecraftserver java patched jar gi t paper at net minecraft server iasynctaskhandler awaittasks iasynctaskhandler java patched jar git paper at net minecraft server minecraftserver sleepfortick minecraftserver java patched jar g it paper at net minecraft server minecraftserver run minecraftserver java patched jar git paper at java lang thread run thread java
0
3,655
6,691,849,560
IssuesEvent
2017-10-09 14:29:34
oasis-tcs/sarif-spec
https://api.github.com/repos/oasis-tcs/sarif-spec
closed
Define driving principles for SARIF effort
process
We should articulate and maintain a set of driving principles for the SARIF format. These principles should define a vision for the format in general and be useful for resolving difficult design decisions. Below is a starter list that we should refine, add to or subtract from. 1. SARIF is primarily designed to advance the industry by providing the best direct production format possible. Aggregating results from other formats is another important scenario but secondary to direct production. 2. SARIF defines a range of data that shall be expressed in order to best support static analysis tooling. The specification describes a JSON implementation of this standard. It should be possible to define other implementations (such as XML). 3. SARIF is designed for static analysis tools and any concept that generally applies for this scenario shall be considered for the format. SARIF can clearly be used for many dynamic analysis scenarios and we should consider augmenting the format for this class of tooling, but not in cases where what is proposed is applicable to the dynamic analysis domain only. 4. SARIF is domain-agnostic; that is, it does not contain objects or properties that are specific to a single domain, such as security or compliance. However, SARIF might define specific values for properties that are specific to a single domain. For example, the proposed result.taxonomies property might define a dictionary entry whose key invokes a standard classification for memory safety issues only. 5. The SARIF design is focused on expressing results as produced by a tool at a specific point-in-time and current excludes detailed thinking related to results management (associated result work item, false positive evaluation, etc.). These concepts may be addressed by defining or proposing 'profiles' that broaden SARIF's design surface area, contingent on progress with core work.
1.0
Define driving principles for SARIF effort - We should articulate and maintain a set of driving principles for the SARIF format. These principles should define a vision for the format in general and be useful for resolving difficult design decisions. Below is a starter list that we should refine, add to or subtract from. 1. SARIF is primarily designed to advance the industry by providing the best direct production format possible. Aggregating results from other formats is another important scenario but secondary to direct production. 2. SARIF defines a range of data that shall be expressed in order to best support static analysis tooling. The specification describes a JSON implementation of this standard. It should be possible to define other implementations (such as XML). 3. SARIF is designed for static analysis tools and any concept that generally applies for this scenario shall be considered for the format. SARIF can clearly be used for many dynamic analysis scenarios and we should consider augmenting the format for this class of tooling, but not in cases where what is proposed is applicable to the dynamic analysis domain only. 4. SARIF is domain-agnostic; that is, it does not contain objects or properties that are specific to a single domain, such as security or compliance. However, SARIF might define specific values for properties that are specific to a single domain. For example, the proposed result.taxonomies property might define a dictionary entry whose key invokes a standard classification for memory safety issues only. 5. The SARIF design is focused on expressing results as produced by a tool at a specific point-in-time and current excludes detailed thinking related to results management (associated result work item, false positive evaluation, etc.). These concepts may be addressed by defining or proposing 'profiles' that broaden SARIF's design surface area, contingent on progress with core work.
non_defect
define driving principles for sarif effort we should articulate and maintain a set of driving principles for the sarif format these principles should define a vision for the format in general and be useful for resolving difficult design decisions below is a starter list that we should refine add to or subtract from sarif is primarily designed to advance the industry by providing the best direct production format possible aggregating results from other formats is another important scenario but secondary to direct production sarif defines a range of data that shall be expressed in order to best support static analysis tooling the specification describes a json implementation of this standard it should be possible to define other implementations such as xml sarif is designed for static analysis tools and any concept that generally applies for this scenario shall be considered for the format sarif can clearly be used for many dynamic analysis scenarios and we should consider augmenting the format for this class of tooling but not in cases where what is proposed is applicable to the dynamic analysis domain only sarif is domain agnostic that is it does not contain objects or properties that are specific to a single domain such as security or compliance however sarif might define specific values for properties that are specific to a single domain for example the proposed result taxonomies property might define a dictionary entry whose key invokes a standard classification for memory safety issues only the sarif design is focused on expressing results as produced by a tool at a specific point in time and current excludes detailed thinking related to results management associated result work item false positive evaluation etc these concepts may be addressed by defining or proposing profiles that broaden sarif s design surface area contingent on progress with core work
0
68,054
13,065,826,887
IssuesEvent
2020-07-30 20:30:05
microsoft/vscode-python
https://api.github.com/repos/microsoft/vscode-python
closed
Jupyter Notebook Auto-Import?
data science ds-vscode-notebook type-bug
Hi, sorry if I'm missing this, but does Pylance support auto-imports / support prompting for imports when writing in a Jupyter notebook? It's at least not working for me out of the box on latest Pylance. Autocomplete and type sensing seems to be working, but having auto-imports (i.e., type FooClass and the IntelliSense prompts to add `import FooClass`) would be a great time-saver.
1.0
Jupyter Notebook Auto-Import? - Hi, sorry if I'm missing this, but does Pylance support auto-imports / support prompting for imports when writing in a Jupyter notebook? It's at least not working for me out of the box on latest Pylance. Autocomplete and type sensing seems to be working, but having auto-imports (i.e., type FooClass and the IntelliSense prompts to add `import FooClass`) would be a great time-saver.
non_defect
jupyter notebook auto import hi sorry if i m missing this but does pylance support auto imports support prompting for imports when writing in a jupyter notebook it s at least not working for me out of the box on latest pylance autocomplete and type sensing seems to be working but having auto imports i e type fooclass and the intellisense prompts to add import fooclass would be a great time saver
0
279,423
24,224,193,979
IssuesEvent
2022-09-26 13:17:46
vegaprotocol/frontend-monorepo
https://api.github.com/repos/vegaprotocol/frontend-monorepo
closed
Switch test run to self hosted runners
Testing 🧪 chore
## The Chore Ops have kindly provided us with self hosted runners. We now need to switch the work flows to use the runners and check that everything runs ok.
1.0
Switch test run to self hosted runners - ## The Chore Ops have kindly provided us with self hosted runners. We now need to switch the work flows to use the runners and check that everything runs ok.
non_defect
switch test run to self hosted runners the chore ops have kindly provided us with self hosted runners we now need to switch the work flows to use the runners and check that everything runs ok
0
79,952
29,796,725,334
IssuesEvent
2023-06-16 03:30:17
zed-industries/community
https://api.github.com/repos/zed-industries/community
opened
In the integrated terminal, swiftly tapping the "Up + Enter" keystroke combination will overlook the carriage return
defect triage admin read
### Check for existing issues - [X] Completed ### Describe the bug / provide steps to reproduce it Not solely limited to the carriage return key, any swift combination involving the "Up" or "Down" keys (for browsing terminal history) will be overlooked. ### Environment Zed: v0.90.2 (stable) OS: macOS 13.4.0 Memory: 16 GiB Architecture: aarch64 ### If applicable, add mockups / screenshots to help explain present your vision of the feature https://github.com/zed-industries/community/assets/15730824/2b9e6c8e-2ed0-43f8-b6ab-648850c087c2 By way of comparison, in the external terminal, the identical procedure yields the following outcome. https://github.com/zed-industries/community/assets/15730824/465f3c70-c48c-4b9d-9428-bb681fe7fc53 ### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue. If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000. _No response_
1.0
In the integrated terminal, swiftly tapping the "Up + Enter" keystroke combination will overlook the carriage return - ### Check for existing issues - [X] Completed ### Describe the bug / provide steps to reproduce it Not solely limited to the carriage return key, any swift combination involving the "Up" or "Down" keys (for browsing terminal history) will be overlooked. ### Environment Zed: v0.90.2 (stable) OS: macOS 13.4.0 Memory: 16 GiB Architecture: aarch64 ### If applicable, add mockups / screenshots to help explain present your vision of the feature https://github.com/zed-industries/community/assets/15730824/2b9e6c8e-2ed0-43f8-b6ab-648850c087c2 By way of comparison, in the external terminal, the identical procedure yields the following outcome. https://github.com/zed-industries/community/assets/15730824/465f3c70-c48c-4b9d-9428-bb681fe7fc53 ### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue. If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000. _No response_
defect
in the integrated terminal swiftly tapping the up enter keystroke combination will overlook the carriage return check for existing issues completed describe the bug provide steps to reproduce it not solely limited to the carriage return key any swift combination involving the up or down keys for browsing terminal history will be overlooked environment zed stable os macos memory gib architecture if applicable add mockups screenshots to help explain present your vision of the feature by way of comparison in the external terminal the identical procedure yields the following outcome if applicable attach your library logs zed zed log file to this issue if you only need the most recent lines you can run the zed open log command palette action to see the last no response
1
6,978
10,129,759,576
IssuesEvent
2019-08-01 15:25:34
ptddatateam/PublicTransitDataReportingPortal
https://api.github.com/repos/ptddatateam/PublicTransitDataReportingPortal
opened
Prep for meeting to discuss Summary of Public Transportation Vision
process
Prep for meeting to discuss Summary of Public Transportation Vision
1.0
Prep for meeting to discuss Summary of Public Transportation Vision - Prep for meeting to discuss Summary of Public Transportation Vision
non_defect
prep for meeting to discuss summary of public transportation vision prep for meeting to discuss summary of public transportation vision
0
219,792
7,345,942,663
IssuesEvent
2018-03-07 19:03:10
StrangeLoopGames/EcoIssues
https://api.github.com/repos/StrangeLoopGames/EcoIssues
closed
Aqueducts don't transport water far enough?
High Priority
Had some people complain about aqueducts - let's make sure they work right, because they are very important for farming now.
1.0
Aqueducts don't transport water far enough? - Had some people complain about aqueducts - let's make sure they work right, because they are very important for farming now.
non_defect
aqueducts don t transport water far enough had some people complain about aqueducts let s make sure they work right because they are very important for farming now
0
24,032
3,900,486,202
IssuesEvent
2016-04-18 06:23:04
obiwankennedy/cahootseditor
https://api.github.com/repos/obiwankennedy/cahootseditor
closed
[Feature request] Qt5 and Independant Widget
auto-migrated Priority-Medium Type-Defect
``` Hi, I'm the developer of rolisteam. And I am interested into providing collaborative text editor in my software. It is an easy way to integrate cahootseditor into another application ? I'm also interesting to know how you manage to do it, what is the status of your program? Do you plan to port your tool to Qt5 ? ``` Original issue reported on code.google.com by `renaud.g...@gmail.com` on 4 Mar 2015 at 3:22
1.0
[Feature request] Qt5 and Independant Widget - ``` Hi, I'm the developer of rolisteam. And I am interested into providing collaborative text editor in my software. It is an easy way to integrate cahootseditor into another application ? I'm also interesting to know how you manage to do it, what is the status of your program? Do you plan to port your tool to Qt5 ? ``` Original issue reported on code.google.com by `renaud.g...@gmail.com` on 4 Mar 2015 at 3:22
defect
and independant widget hi i m the developer of rolisteam and i am interested into providing collaborative text editor in my software it is an easy way to integrate cahootseditor into another application i m also interesting to know how you manage to do it what is the status of your program do you plan to port your tool to original issue reported on code google com by renaud g gmail com on mar at
1
61,831
17,023,788,378
IssuesEvent
2021-07-03 03:51:34
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
closed
Nominatim: Unable to create some database functions
Component: nominatim Priority: minor Resolution: fixed Type: defect
**[Submitted to the original trac issue database at 12.41pm, Thursday, 29th March 2012]** I get this on './utils/setup.php --create-functions': ERROR: syntax error at or near "$1" LINE 1: insert into location_property_tiger (place_id, $1 , $2 , h... ^ QUERY: insert into location_property_tiger (place_id, $1 , $2 , housenumber, postcode, centroid) values (nextval('seq_place'), $1 , $2 , $3 , $4 , ST_Line_Interpolate_Point( $5 , ( $3 ::float- $6 ::float)/ $7 ::float)) CONTEXT: SQL statement in PL/PgSQL function "tigger_create_interpolation" near line 75 ERROR: syntax error at or near "$1" LINE 1: insert into location_property_aux (place_id, $1 , parent_pl... ^ QUERY: insert into location_property_aux (place_id, $1 , parent_place_id, housenumber, postcode, centroid) values (nextval('seq_place'), $1 , $2 , $3 , $4 , $5 ) CONTEXT: SQL statement in PL/PgSQL function "aux_create_property" near line 40 Using postgres 8.4 on ubuntu and Nominatim from git. Any idea?
1.0
Nominatim: Unable to create some database functions - **[Submitted to the original trac issue database at 12.41pm, Thursday, 29th March 2012]** I get this on './utils/setup.php --create-functions': ERROR: syntax error at or near "$1" LINE 1: insert into location_property_tiger (place_id, $1 , $2 , h... ^ QUERY: insert into location_property_tiger (place_id, $1 , $2 , housenumber, postcode, centroid) values (nextval('seq_place'), $1 , $2 , $3 , $4 , ST_Line_Interpolate_Point( $5 , ( $3 ::float- $6 ::float)/ $7 ::float)) CONTEXT: SQL statement in PL/PgSQL function "tigger_create_interpolation" near line 75 ERROR: syntax error at or near "$1" LINE 1: insert into location_property_aux (place_id, $1 , parent_pl... ^ QUERY: insert into location_property_aux (place_id, $1 , parent_place_id, housenumber, postcode, centroid) values (nextval('seq_place'), $1 , $2 , $3 , $4 , $5 ) CONTEXT: SQL statement in PL/PgSQL function "aux_create_property" near line 40 Using postgres 8.4 on ubuntu and Nominatim from git. Any idea?
defect
nominatim unable to create some database functions i get this on utils setup php create functions error syntax error at or near line insert into location property tiger place id h query insert into location property tiger place id housenumber postcode centroid values nextval seq place st line interpolate point float float float context sql statement in pl pgsql function tigger create interpolation near line error syntax error at or near line insert into location property aux place id parent pl query insert into location property aux place id parent place id housenumber postcode centroid values nextval seq place context sql statement in pl pgsql function aux create property near line using postgres on ubuntu and nominatim from git any idea
1
37,489
18,439,975,371
IssuesEvent
2021-10-14 16:51:28
w23/xash3d-fwgs
https://api.github.com/repos/w23/xash3d-fwgs
closed
rtx: improve surface lights culling
enhancement ray tracing performance
- [x] collect by bsp leafs and pvs - [x] static surface lights - [x] dynamic lights - [x] bbox culling - [x] cull by normal direction - ~[ ] cull by pdf/distance/"light intensity"~ - [x] check that this makes sense (by late culling in shader): IT DOESN'T. light are usually bright enough to affect really distant objects - [x] late culling in shader: - [x] cull by target geometry normal - [x] cull by max possible color (emissive * target) - [x] cull by brdf attenuated color
True
rtx: improve surface lights culling - - [x] collect by bsp leafs and pvs - [x] static surface lights - [x] dynamic lights - [x] bbox culling - [x] cull by normal direction - ~[ ] cull by pdf/distance/"light intensity"~ - [x] check that this makes sense (by late culling in shader): IT DOESN'T. light are usually bright enough to affect really distant objects - [x] late culling in shader: - [x] cull by target geometry normal - [x] cull by max possible color (emissive * target) - [x] cull by brdf attenuated color
non_defect
rtx improve surface lights culling collect by bsp leafs and pvs static surface lights dynamic lights bbox culling cull by normal direction cull by pdf distance light intensity check that this makes sense by late culling in shader it doesn t light are usually bright enough to affect really distant objects late culling in shader cull by target geometry normal cull by max possible color emissive target cull by brdf attenuated color
0
130,116
18,154,947,072
IssuesEvent
2021-09-26 22:35:12
ghc-dev/Melissa-Morgan
https://api.github.com/repos/ghc-dev/Melissa-Morgan
opened
CVE-2020-14365 (High) detected in ansible-2.9.9.tar.gz
security vulnerability
## CVE-2020-14365 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansible-2.9.9.tar.gz</b></p></summary> <p>Radically simple IT automation</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz">https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz</a></p> <p>Path to dependency file: Melissa-Morgan/requirements.txt</p> <p>Path to vulnerable library: /requirements.txt</p> <p> Dependency Hierarchy: - :x: **ansible-2.9.9.tar.gz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Melissa-Morgan/commit/ffcf632886f3acc1fcbc85083e7db7dd86a39bf8">ffcf632886f3acc1fcbc85083e7db7dd86a39bf8</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A flaw was found in the Ansible Engine, in ansible-engine 2.8.x before 2.8.15 and ansible-engine 2.9.x before 2.9.13, when installing packages using the dnf module. GPG signatures are ignored during installation even when disable_gpg_check is set to False, which is the default behavior. This flaw leads to malicious packages being installed on the system and arbitrary code executed via package installation scripts. The highest threat from this vulnerability is to integrity and system availability. <p>Publish Date: 2020-09-23 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14365>CVE-2020-14365</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1869154">https://bugzilla.redhat.com/show_bug.cgi?id=1869154</a></p> <p>Release Date: 2020-07-21</p> <p>Fix Resolution: 2.8.15,2.9.13</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"ansible","packageVersion":"2.9.9","packageFilePaths":["/requirements.txt"],"isTransitiveDependency":false,"dependencyTree":"ansible:2.9.9","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.8.15,2.9.13"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-14365","vulnerabilityDetails":"A flaw was found in the Ansible Engine, in ansible-engine 2.8.x before 2.8.15 and ansible-engine 2.9.x before 2.9.13, when installing packages using the dnf module. 
GPG signatures are ignored during installation even when disable_gpg_check is set to False, which is the default behavior. This flaw leads to malicious packages being installed on the system and arbitrary code executed via package installation scripts. The highest threat from this vulnerability is to integrity and system availability.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14365","cvss3Severity":"high","cvss3Score":"7.1","cvss3Metrics":{"A":"High","AC":"Low","PR":"Low","S":"Unchanged","C":"None","UI":"None","AV":"Local","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2020-14365 (High) detected in ansible-2.9.9.tar.gz - ## CVE-2020-14365 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansible-2.9.9.tar.gz</b></p></summary> <p>Radically simple IT automation</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz">https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz</a></p> <p>Path to dependency file: Melissa-Morgan/requirements.txt</p> <p>Path to vulnerable library: /requirements.txt</p> <p> Dependency Hierarchy: - :x: **ansible-2.9.9.tar.gz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Melissa-Morgan/commit/ffcf632886f3acc1fcbc85083e7db7dd86a39bf8">ffcf632886f3acc1fcbc85083e7db7dd86a39bf8</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A flaw was found in the Ansible Engine, in ansible-engine 2.8.x before 2.8.15 and ansible-engine 2.9.x before 2.9.13, when installing packages using the dnf module. GPG signatures are ignored during installation even when disable_gpg_check is set to False, which is the default behavior. This flaw leads to malicious packages being installed on the system and arbitrary code executed via package installation scripts. The highest threat from this vulnerability is to integrity and system availability. <p>Publish Date: 2020-09-23 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14365>CVE-2020-14365</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1869154">https://bugzilla.redhat.com/show_bug.cgi?id=1869154</a></p> <p>Release Date: 2020-07-21</p> <p>Fix Resolution: 2.8.15,2.9.13</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"ansible","packageVersion":"2.9.9","packageFilePaths":["/requirements.txt"],"isTransitiveDependency":false,"dependencyTree":"ansible:2.9.9","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.8.15,2.9.13"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-14365","vulnerabilityDetails":"A flaw was found in the Ansible Engine, in ansible-engine 2.8.x before 2.8.15 and ansible-engine 2.9.x before 2.9.13, when installing packages using the dnf module. 
GPG signatures are ignored during installation even when disable_gpg_check is set to False, which is the default behavior. This flaw leads to malicious packages being installed on the system and arbitrary code executed via package installation scripts. The highest threat from this vulnerability is to integrity and system availability.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14365","cvss3Severity":"high","cvss3Score":"7.1","cvss3Metrics":{"A":"High","AC":"Low","PR":"Low","S":"Unchanged","C":"None","UI":"None","AV":"Local","I":"High"},"extraData":{}}</REMEDIATE> -->
non_defect
cve high detected in ansible tar gz cve high severity vulnerability vulnerable library ansible tar gz radically simple it automation library home page a href path to dependency file melissa morgan requirements txt path to vulnerable library requirements txt dependency hierarchy x ansible tar gz vulnerable library found in head commit a href found in base branch master vulnerability details a flaw was found in the ansible engine in ansible engine x before and ansible engine x before when installing packages using the dnf module gpg signatures are ignored during installation even when disable gpg check is set to false which is the default behavior this flaw leads to malicious packages being installed on the system and arbitrary code executed via package installation scripts the highest threat from this vulnerability is to integrity and system availability publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree ansible isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails a flaw was found in the ansible engine in ansible engine x before and ansible engine x before when installing packages using the dnf module gpg signatures are ignored during installation even when disable gpg check is set to false which is the default behavior this flaw leads to malicious packages being installed on the system and arbitrary code executed via package installation scripts the highest threat from this vulnerability is to integrity and system availability vulnerabilityurl
0
36,740
4,757,941,728
IssuesEvent
2016-10-24 18:03:39
stamen/bluegreenway
https://api.github.com/repos/stamen/bluegreenway
closed
Image and link to PDF
Design Polish p2
Include a thumbnail and link to the brochure PDF (does not need to be lightbox) on the about page.
1.0
Image and link to PDF - Include a thumbnail and link to the brochure PDF (does not need to be lightbox) on the about page.
non_defect
image and link to pdf include a thumbnail and link to the brochure pdf does not need to be lightbox on the about page
0
126,889
12,301,656,072
IssuesEvent
2020-05-11 15:43:56
PEM-Humboldt/biotablero-scrum
https://api.github.com/repos/PEM-Humboldt/biotablero-scrum
closed
Document Github actions in the workflow document
type:documentation
## Description: Add to the workflow document an explanation of the github actions currently configured for the applicable repositories. ### Result: Workflow document including the documentation of the github actions ### Associated functionality in the Frontend: ## Considerations:
1.0
Document Github actions in the workflow document - ## Description: Add to the workflow document an explanation of the github actions currently configured for the applicable repositories. ### Result: Workflow document including the documentation of the github actions ### Associated functionality in the Frontend: ## Considerations:
non_defect
document github actions in the workflow document description add to the workflow document an explanation of the github actions currently configured for the applicable repositories result workflow document including the documentation of the github actions associated functionality in the frontend considerations
0
30,720
6,257,515,400
IssuesEvent
2017-07-14 13:29:33
hazelcast/hazelcast
https://api.github.com/repos/hazelcast/hazelcast
closed
Premature removal of near cached entries
Team: Client Team: Core Type: Defect
__Given:__ Every write to map/cache generates invalidations and they have a partition-sequence numbers to track missing ones. __Example scenario:__ Think of a scenario that invalidations are generated previously for some other near caches and then later a new near cached map is created. Currently it assumes partition-sequences are starting from 1 but it may not be the case since there can be previously created invalidations for other near caches for the same map before. So partition-sequence can be a bigger number than 1 when new near cached map starts. In this case, when first reconciliation task runs, it can mark all near cached entries as stale even they are not stale.(because, for instance, reconciliation has sequence number 99 for partition-1 but near cache side has 0 for the same partition and reconciliation thinks there are missing invalidations) __Effect:__ In a similar scenario above, all near cached entries before first reconciliation period will be removed. After first reconciliation period, there will be no such problem. Default time for reconciliation period is 60 seconds.
1.0
Premature removal of near cached entries - __Given:__ Every write to map/cache generates invalidations and they have a partition-sequence numbers to track missing ones. __Example scenario:__ Think of a scenario that invalidations are generated previously for some other near caches and then later a new near cached map is created. Currently it assumes partition-sequences are starting from 1 but it may not be the case since there can be previously created invalidations for other near caches for the same map before. So partition-sequence can be a bigger number than 1 when new near cached map starts. In this case, when first reconciliation task runs, it can mark all near cached entries as stale even they are not stale.(because, for instance, reconciliation has sequence number 99 for partition-1 but near cache side has 0 for the same partition and reconciliation thinks there are missing invalidations) __Effect:__ In a similar scenario above, all near cached entries before first reconciliation period will be removed. After first reconciliation period, there will be no such problem. Default time for reconciliation period is 60 seconds.
defect
premature removal of near cached entries given every write to map cache generates invalidations and they have a partition sequence numbers to track missing ones example scenario think of a scenario that invalidations are generated previously for some other near caches and then later a new near cached map is created currently it assumes partition sequences are starting from but it may not be the case since there can be previously created invalidations for other near caches for the same map before so partition sequence can be a bigger number than when new near cached map starts in this case when first reconciliation task runs it can mark all near cached entries as stale even they are not stale because for instance reconciliation has sequence number for partition but near cache side has for the same partition and reconciliation thinks there are missing invalidations effect in a similar scenario above all near cached entries before first reconciliation period will be removed after first reconciliation period there will be no such problem default time for reconciliation period is seconds
1
40,752
10,143,633,744
IssuesEvent
2019-08-04 13:55:27
scipy/scipy
https://api.github.com/repos/scipy/scipy
closed
BUG: sparse: reshape method allows a shape containing an arbitrary number of 1s.
defect scipy.sparse
The `reshape` method of the sparse matrix classes allows shapes with more than two dimensions if some of the lengths are 1. For example, ``` In [24]: from scipy.sparse import csr_matrix In [25]: a = csr_matrix(np.eye(4)) In [26]: a.reshape(2, 1, 8, 1, 1) Out[26]: <2x8 sparse matrix of type '<class 'numpy.float64'>' with 4 stored elements in COOrdinate format> ``` That should raise an error. Those 1s should not be silently ignored. This was checked for the current master branch: ``` In [27]: import scipy In [28]: scipy.__version__ Out[28]: '1.3.0.dev0+4fd82c0' ```
1.0
BUG: sparse: reshape method allows a shape containing an arbitrary number of 1s. - The `reshape` method of the sparse matrix classes allows shapes with more than two dimensions if some of the lengths are 1. For example, ``` In [24]: from scipy.sparse import csr_matrix In [25]: a = csr_matrix(np.eye(4)) In [26]: a.reshape(2, 1, 8, 1, 1) Out[26]: <2x8 sparse matrix of type '<class 'numpy.float64'>' with 4 stored elements in COOrdinate format> ``` That should raise an error. Those 1s should not be silently ignored. This was checked for the current master branch: ``` In [27]: import scipy In [28]: scipy.__version__ Out[28]: '1.3.0.dev0+4fd82c0' ```
defect
bug sparse reshape method allows a shape containing an arbitrary number of the reshape method of the sparse matrix classes allows shapes with more than two dimensions if some of the lengths are for example in from scipy sparse import csr matrix in a csr matrix np eye in a reshape out with stored elements in coordinate format that should raise an error those should not be silently ignored this was checked for the current master branch in import scipy in scipy version out
1
47,328
13,056,122,457
IssuesEvent
2020-07-30 03:43:30
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
closed
bad description of 6599 (Trac #351)
Migrated from Trac combo simulation defect
The energy range on the SimpProd webpage, http://internal.icecube.wisc.edu/simulation/dataset/6599. The energy range is listed as 10^3.0^ - 10^9^, but is actually 10^1.0^ - 10^9^. Migrated from https://code.icecube.wisc.edu/ticket/351 ```json { "status": "closed", "changetime": "2012-02-15T23:09:40", "description": "The energy range on the SimpProd webpage, http://internal.icecube.wisc.edu/simulation/dataset/6599. The energy range is listed as 10^3.0^ - 10^9^, but is actually 10^1.0^ - 10^9^. ", "reporter": "icecube", "cc": "", "resolution": "invalid", "_ts": "1329347380000000", "component": "combo simulation", "summary": "bad description of 6599", "priority": "minor", "keywords": "", "time": "2012-02-03T20:28:34", "milestone": "", "owner": "olivas", "type": "defect" } ```
1.0
bad description of 6599 (Trac #351) - The energy range on the SimpProd webpage, http://internal.icecube.wisc.edu/simulation/dataset/6599. The energy range is listed as 10^3.0^ - 10^9^, but is actually 10^1.0^ - 10^9^. Migrated from https://code.icecube.wisc.edu/ticket/351 ```json { "status": "closed", "changetime": "2012-02-15T23:09:40", "description": "The energy range on the SimpProd webpage, http://internal.icecube.wisc.edu/simulation/dataset/6599. The energy range is listed as 10^3.0^ - 10^9^, but is actually 10^1.0^ - 10^9^. ", "reporter": "icecube", "cc": "", "resolution": "invalid", "_ts": "1329347380000000", "component": "combo simulation", "summary": "bad description of 6599", "priority": "minor", "keywords": "", "time": "2012-02-03T20:28:34", "milestone": "", "owner": "olivas", "type": "defect" } ```
defect
bad description of trac the energy range on the simpprod webpage the energy range is listed as but is actually migrated from json status closed changetime description the energy range on the simpprod webpage the energy range is listed as but is actually reporter icecube cc resolution invalid ts component combo simulation summary bad description of priority minor keywords time milestone owner olivas type defect
1
141,239
11,405,646,339
IssuesEvent
2020-01-31 12:37:01
ubtue/DatenProbleme
https://api.github.com/repos/ubtue/DatenProbleme
closed
Transcription of Arabic characters_8
Zotero_PICA ready for testing
**URL** https://www.jstor.org/stable/i271906?refreqid=excelsior%3Aa567f27e05847789c030214eb773f1e9 **Detailed problem description** The character ẓ (z with a dot below) is not carried over; it is replaced with ? instead.
1.0
Transcription of Arabic characters_8 - **URL** https://www.jstor.org/stable/i271906?refreqid=excelsior%3Aa567f27e05847789c030214eb773f1e9 **Detailed problem description** The character ẓ (z with a dot below) is not carried over; it is replaced with ? instead.
non_defect
transcription of arabic characters url detailed problem description the character ẓ z with a dot below is not carried over it is replaced with instead
0
626,073
19,784,564,956
IssuesEvent
2022-01-18 04:07:37
lokka30/LevelledMobs
https://api.github.com/repos/lokka30/LevelledMobs
closed
Adding a distance mechanic to TARGETED nametag visibility
type: improvement priority: normal
suggested by Zer0#7111 If possible add a configurable range values to nametag visibility system. Perhaps would be applicable to TARGETED, TRACKING and ATTACKED. If that's not possible or doesn't make sense then just the TARGETED value.
1.0
Adding a distance mechanic to TARGETED nametag visibility - suggested by Zer0#7111 If possible add a configurable range values to nametag visibility system. Perhaps would be applicable to TARGETED, TRACKING and ATTACKED. If that's not possible or doesn't make sense then just the TARGETED value.
non_defect
adding a distance mechanic to targeted nametag visibility suggested by if possible add a configurable range values to nametag visibility system perhaps would be applicable to targeted tracking and attacked if that s not possible or doesn t make sense then just the targeted value
0
15,137
2,611,312,460
IssuesEvent
2015-02-27 03:01:47
mavlink/qgroundcontrol
https://api.github.com/repos/mavlink/qgroundcontrol
closed
Deleting a UAS in the UASView after disconnecting crashes QGC
priority-important
Maybe remove the right-click "Delete" option should be removed? I know there has been some action to not allow removing UASes. Setting this to important since it's a crasher, at least on x64 Linux.
1.0
Deleting a UAS in the UASView after disconnecting crashes QGC - Maybe remove the right-click "Delete" option should be removed? I know there has been some action to not allow removing UASes. Setting this to important since it's a crasher, at least on x64 Linux.
non_defect
deleting a uas in the uasview after disconnecting crashes qgc maybe remove the right click delete option should be removed i know there has been some action to not allow removing uases setting this to important since it s a crasher at least on linux
0