Column dtypes and value ranges: Unnamed: 0 (int64, 0 to 832k); id (float64, 2.49B to 32.1B); type (string, 1 class); created_at (string, length 19); repo (string, length 5 to 112); repo_url (string, length 34 to 141); action (string, 3 classes); title (string, length 1 to 757); labels (string, length 4 to 664); body (string, length 3 to 261k); index (string, 10 classes); text_combine (string, length 96 to 261k); label (string, 2 classes); text (string, length 96 to 232k); binary_label (int64, 0 to 1).

| Unnamed: 0 | id | type | created_at | repo | repo_url | action | title | labels | body | index | text_combine | label | text | binary_label |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
469,492
| 13,518,940,780
|
IssuesEvent
|
2020-09-15 00:33:08
|
grpc/grpc
|
https://api.github.com/repos/grpc/grpc
|
closed
|
iOS if .proto has same object , like 2 .proto also has 'base' element, that will be errors, duplicate symbols for architecture x86_64
|
disposition/stale kind/question lang/ObjC platform/iOS priority/P3
|
iOS if .proto has same object , like 2 .proto also has 'base' , that will be errors, duplicate symbols for architecture x86_64
|
1.0
|
iOS if .proto has same object , like 2 .proto also has 'base' element, that will be errors, duplicate symbols for architecture x86_64 - iOS if .proto has same object , like 2 .proto also has 'base' , that will be errors, duplicate symbols for architecture x86_64
|
non_defect
|
ios if proto has same object like proto also has base element that will be errors duplicate symbols for architecture ios if proto has same object like proto also has base that will be errors duplicate symbols for architecture
| 0
|
189,649
| 22,047,083,817
|
IssuesEvent
|
2022-05-30 03:51:38
|
Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492
|
https://api.github.com/repos/Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492
|
closed
|
CVE-2020-27815 (High) detected in linuxlinux-4.19.88 - autoclosed
|
security vulnerability
|
## CVE-2020-27815 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.88</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492/commit/8d2169763c8858bce8d07fbb569f01ef9b30383b">8d2169763c8858bce8d07fbb569f01ef9b30383b</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/linux-4.19.72/fs/jfs/jfs_dmap.h</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/linux-4.19.72/fs/jfs/jfs_dmap.h</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/linux-4.19.72/fs/jfs/jfs_dmap.h</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in the JFS filesystem code in the Linux Kernel which allows a local attacker with the ability to set extended attributes to panic the system, causing memory corruption or escalating privileges. The highest threat from this vulnerability is to confidentiality, integrity, as well as system availability.
<p>Publish Date: 2021-05-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-27815>CVE-2020-27815</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: Low
  - Privileges Required: Low
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2020-27815">https://nvd.nist.gov/vuln/detail/CVE-2020-27815</a></p>
<p>Release Date: 2021-05-26</p>
<p>Fix Resolution: linux-libc-headers - 5.13;linux-yocto - 5.4.20+gitAUTOINC+c11911d4d1_f4d7dbafb1,4.8.26+gitAUTOINC+1c60e003c7_27efc3ba68</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-27815 (High) detected in linuxlinux-4.19.88 - autoclosed - ## CVE-2020-27815 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.88</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492/commit/8d2169763c8858bce8d07fbb569f01ef9b30383b">8d2169763c8858bce8d07fbb569f01ef9b30383b</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/linux-4.19.72/fs/jfs/jfs_dmap.h</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/linux-4.19.72/fs/jfs/jfs_dmap.h</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/linux-4.19.72/fs/jfs/jfs_dmap.h</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in the JFS filesystem code in the Linux Kernel which allows a local attacker with the ability to set extended attributes to panic the system, causing memory corruption or escalating privileges. The highest threat from this vulnerability is to confidentiality, integrity, as well as system availability.
<p>Publish Date: 2021-05-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-27815>CVE-2020-27815</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: Low
  - Privileges Required: Low
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2020-27815">https://nvd.nist.gov/vuln/detail/CVE-2020-27815</a></p>
<p>Release Date: 2021-05-26</p>
<p>Fix Resolution: linux-libc-headers - 5.13;linux-yocto - 5.4.20+gitAUTOINC+c11911d4d1_f4d7dbafb1,4.8.26+gitAUTOINC+1c60e003c7_27efc3ba68</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in linuxlinux autoclosed cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files linux fs jfs jfs dmap h linux fs jfs jfs dmap h linux fs jfs jfs dmap h vulnerability details a flaw was found in the jfs filesystem code in the linux kernel which allows a local attacker with the ability to set extended attributes to panic the system causing memory corruption or escalating privileges the highest threat from this vulnerability is to confidentiality integrity as well as system availability publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution linux libc headers linux yocto gitautoinc gitautoinc step up your open source security game with whitesource
| 0
|
17,576
| 4,173,502,203
|
IssuesEvent
|
2016-06-21 10:46:39
|
ubiquits/toolchain
|
https://api.github.com/repos/ubiquits/toolchain
|
opened
|
Generate server side certificates on initialization for usage by the Auth service
|
comp: cli comp: services effort2: medium (day) needs: documentation priority3: required type: feature
|
On quickstart initialization, certificates should be generated.
Consider https://www.npmjs.com/package/ursa
or https://www.npmjs.com/package/node-rsa
or https://www.npmjs.com/package/node-forge (can generate both rsa and crt keys)
When initialized, the private key MUST NOT be committed (`.gitignore` it) but it should be echoed to the console so that the developer can store it somewhere.
An ssl cert should be generated from the key pair, and the server startup should support https on port 8443 (localhost). Note that when in production, it should actually be http only, as ssl termination should be at the load balancer for performance, as once in the docker weave network, ssl security is superfluous. Consider leaving getting https working for a later release unless there is demand.
As teams will be starting projects at separate times, they should generate their own keys, so the key generation should be checked on startup, just with a confirm if not in production. If in production, refuse to start the server.
|
1.0
|
Generate server side certificates on initialization for usage by the Auth service - On quickstart initialization, certificates should be generated.
Consider https://www.npmjs.com/package/ursa
or https://www.npmjs.com/package/node-rsa
or https://www.npmjs.com/package/node-forge (can generate both rsa and crt keys)
When initialized, the private key MUST NOT be committed (`.gitignore` it) but it should be echoed to the console so that the developer can store it somewhere.
An ssl cert should be generated from the key pair, and the server startup should support https on port 8443 (localhost). Note that when in production, it should actually be http only, as ssl termination should be at the load balancer for performance, as once in the docker weave network, ssl security is superfluous. Consider leaving getting https working for a later release unless there is demand.
As teams will be starting projects at separate times, they should generate their own keys, so the key generation should be checked on startup, just with a confirm if not in production. If in production, refuse to start the server.
|
non_defect
|
generate server side certificates on initialization for usage by the auth service on quickstart initialization certificates should be generated consider or or can generate both rsa and crt keys when initialized the private key must not be committed gitignore it but it should be echoed to the console so that the developer can store it somewhere an ssl cert should be generated from the key pair and the server startup should support https on port localhost note that when in production it should actually be http only as ssl termination should be at the load balancer for performance as once in the docker weave network ssl security is superfluous consider leaving getting https working for a later release unless there is demand as teams will be starting projects at separate times they should generate their own keys so the key generation should be checked on startup just with a confirm if not in production if in production refuse to start the server
| 0
|
71,597
| 23,714,562,859
|
IssuesEvent
|
2022-08-30 10:38:23
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
Unable to set up secret storage
|
T-Defect
|
### Steps to reproduce
First, I ran Matrix Synapse on my local server; there is no valid domain or SSL/TLS.
Everything is fine, but when a user registers or signs in with the web-based app (V1.11.2) and tries to generate or import a key file, this error appears: (Unable to set up secret storage)
path: all settings --> security & privacy --> Encryption --> Icon "Setup" --> generate new key --> Download / Copy --> asks user password to verify the user --> Error "Unable to set up secret storage". Tested in Chrome, Firefox, etc.
This problem occurs only in the web-based app; the desktop apps are OK.
### Outcome
#### What did you expect?
I expected it to say everything is fine, or anything like this, except "Unable to set up secret storage".
#### What happened instead?
It just says "Unable to set up secret storage" with two options, Try again & Cancel.
My homeserver config:
```
server_name: "matrix.geek.local"
pid_file: "/var/run/matrix-synapse.pid"
enable_registration: true
session_lifetime: 24h
registration_shared_secret: XXXX
#enable_registration_without_verification: true
public_baseurl: http://matrix.geek.local/
web_client_location: http://element.geek.local/
listeners:
  - port: 8008
    tls: false
    type: http
    x_forwarded: true
    bind_addresses: ['127.0.0.1']
    resources:
      - names: [client, federation]
        compress: true
database:
  name: sqlite3
  args:
    database: /var/lib/matrix-synapse/homeserver.db
macaroon_secret_key: "XXXX"
form_secret: "XXXX"
log_config: "/etc/matrix-synapse/log.yaml"
report_stats: true
media_store_path: /var/lib/matrix-synapse/media
signing_key_path: "/etc/matrix-synapse/homeserver.signing.key"
trusted_key_servers:
  - server_name: "http://localhost:8090"
registrations_require_3pid:
  - email
enable_3pid_lookup: true
registration_requires_token: false
allowed_local_3pids:
  - medium: email
    pattern: '^[^@]+@example\.local$'
email:
  smtp_host: mail.example.local
  smtp_port: 25
  force_tls: false
  require_transport_security: false
  enable_tls: false
  notif_from: "Your Friendly %(app)s homeserver <element-noreply@pep.co.ir>"
  app_name: pep_matrix
  enable_notifs: true
  notif_for_new_users: true
  client_base_url: "http://element.geek.local"
  validation_token_lifetime: 15m
  invite_client_location: http://element.geek.local
  subjects:
    ...
password_config:
  enabled: true
  localdb_enabled: false
  policy:
    enabled: true
    ...
modules:
  - module: "ldap_auth_provider.LdapAuthProviderModule"
    config:
      enabled: true
      ...
password_providers:
  - module: "rest_auth_provider.RestAuthProvider"
    config:
      endpoint: "http://localhost:8090"
```
my ma1sd conf
```
matrix:
  domain: 'matrix.geek.local'
  v1: false
  v2: true
key:
  path: '/var/lib/ma1sd/keys'
storage:
  provider:
    sqlite:
      database: '/etc/ma1sd/ma1sd.db'
synapseSql:
  enabled: true
  connection: '/var/lib/matrix-synapse/homeserver.db'
threepid:
  medium:
    email:
      domain:
        whitelist:
          - '*@example.local'
      identity:
        from: 'test@example.local'
      connectors:
        smtp:
          host: 'mail.example.local'
          tls: 0
          port: 25
ldap:
  enabled: true
  lookup: true # hash lookup
  activeDirectory: true
  mode: "search"
  defaultDomain: 'example.local'
  connection:
    host: 'XXXX'
    port: 389
    bindDn: 'test@example.local'
    bindPassword: 'XXXX'
  baseDNs:
    - 'OU=test,DC=example,DC=local'
  attribute:
    mail: "email"
    name: "DisplayName"
dns:
  overwrite:
    homeserver:
      client:
        - name: 'matrix.geek.local'
          value: 'http://localhost:8008'
config:
  policy:
    registration:
      username:
        enforceLowercase: false
logging:
  root: info
  app: info
  requests: false
```
log file:
[rageshake(2).zip](https://github.com/vector-im/element-web/files/9452095/rageshake.2.zip)
### Operating system
Ubuntu 20.04, Ubuntu 18.04, Windows 10
### Browser information
Version 103.0.5060.134 (Official Build) (64-bit), Firefox 104.0 (64-bit)
### URL for webapp
(http://element.geek.local) Local --> Unable to be published on internet
### Application version
V1.11.2
### Homeserver
(http://matrix.geek.local) Local --> Unable to be published on internet
### Will you send logs?
No
|
1.0
|
Unable to set up secret storage - ### Steps to reproduce
First, I ran Matrix Synapse on my local server; there is no valid domain or SSL/TLS.
Everything is fine, but when a user registers or signs in with the web-based app (V1.11.2) and tries to generate or import a key file, this error appears: (Unable to set up secret storage)
path: all settings --> security & privacy --> Encryption --> Icon "Setup" --> generate new key --> Download / Copy --> asks user password to verify the user --> Error "Unable to set up secret storage". Tested in Chrome, Firefox, etc.
This problem occurs only in the web-based app; the desktop apps are OK.
### Outcome
#### What did you expect?
I expected it to say everything is fine, or anything like this, except "Unable to set up secret storage".
#### What happened instead?
It just says "Unable to set up secret storage" with two options, Try again & Cancel.
My homeserver config:
```
server_name: "matrix.geek.local"
pid_file: "/var/run/matrix-synapse.pid"
enable_registration: true
session_lifetime: 24h
registration_shared_secret: XXXX
#enable_registration_without_verification: true
public_baseurl: http://matrix.geek.local/
web_client_location: http://element.geek.local/
listeners:
  - port: 8008
    tls: false
    type: http
    x_forwarded: true
    bind_addresses: ['127.0.0.1']
    resources:
      - names: [client, federation]
        compress: true
database:
  name: sqlite3
  args:
    database: /var/lib/matrix-synapse/homeserver.db
macaroon_secret_key: "XXXX"
form_secret: "XXXX"
log_config: "/etc/matrix-synapse/log.yaml"
report_stats: true
media_store_path: /var/lib/matrix-synapse/media
signing_key_path: "/etc/matrix-synapse/homeserver.signing.key"
trusted_key_servers:
  - server_name: "http://localhost:8090"
registrations_require_3pid:
  - email
enable_3pid_lookup: true
registration_requires_token: false
allowed_local_3pids:
  - medium: email
    pattern: '^[^@]+@example\.local$'
email:
  smtp_host: mail.example.local
  smtp_port: 25
  force_tls: false
  require_transport_security: false
  enable_tls: false
  notif_from: "Your Friendly %(app)s homeserver <element-noreply@pep.co.ir>"
  app_name: pep_matrix
  enable_notifs: true
  notif_for_new_users: true
  client_base_url: "http://element.geek.local"
  validation_token_lifetime: 15m
  invite_client_location: http://element.geek.local
  subjects:
    ...
password_config:
  enabled: true
  localdb_enabled: false
  policy:
    enabled: true
    ...
modules:
  - module: "ldap_auth_provider.LdapAuthProviderModule"
    config:
      enabled: true
      ...
password_providers:
  - module: "rest_auth_provider.RestAuthProvider"
    config:
      endpoint: "http://localhost:8090"
```
my ma1sd conf
```
matrix:
  domain: 'matrix.geek.local'
  v1: false
  v2: true
key:
  path: '/var/lib/ma1sd/keys'
storage:
  provider:
    sqlite:
      database: '/etc/ma1sd/ma1sd.db'
synapseSql:
  enabled: true
  connection: '/var/lib/matrix-synapse/homeserver.db'
threepid:
  medium:
    email:
      domain:
        whitelist:
          - '*@example.local'
      identity:
        from: 'test@example.local'
      connectors:
        smtp:
          host: 'mail.example.local'
          tls: 0
          port: 25
ldap:
  enabled: true
  lookup: true # hash lookup
  activeDirectory: true
  mode: "search"
  defaultDomain: 'example.local'
  connection:
    host: 'XXXX'
    port: 389
    bindDn: 'test@example.local'
    bindPassword: 'XXXX'
  baseDNs:
    - 'OU=test,DC=example,DC=local'
  attribute:
    mail: "email"
    name: "DisplayName"
dns:
  overwrite:
    homeserver:
      client:
        - name: 'matrix.geek.local'
          value: 'http://localhost:8008'
config:
  policy:
    registration:
      username:
        enforceLowercase: false
logging:
  root: info
  app: info
  requests: false
```
log file:
[rageshake(2).zip](https://github.com/vector-im/element-web/files/9452095/rageshake.2.zip)
### Operating system
Ubuntu 20.04, Ubuntu 18.04, Windows 10
### Browser information
Version 103.0.5060.134 (Official Build) (64-bit), Firefox 104.0 (64-bit)
### URL for webapp
(http://element.geek.local) Local --> Unable to be published on internet
### Application version
V1.11.2
### Homeserver
(http://matrix.geek.local) Local --> Unable to be published on internet
### Will you send logs?
No
|
defect
|
unable to set up secret storage steps to reproduce first i have ran matrix synapse on my local server there is no valid domain ssl tls every thing is fine but when a user wants to register or sign in with web base app user wants to generate or import a key file there is this error unable to set up secret storage path all settings security privacy encryption icon setup generate new key download copy asks user password to verify the user error unable to set up secret storage tested in chrome firefox etc this problem is only on web base app the desktop apps are ok outcome what did you expect i expected to say every thing fine or any thing like this exept unable to set up secret storage what happened instead it just says unable to set up secret storage with two icons try again cancel my home server conf erver name matrix geek local pid file var run matrix synapse pid enable registration true session lifetime registration shared secret xxxx enable registration without verification true public baseurl web client location listeners port tls false type http x forwarded true bind addresses resources names compress true database name args database var lib matrix synapse homeserver db macaroon secret key xxxx form secret xxxx log config etc matrix synapse log yaml report stats true media store path var lib matrix synapse media signing key path etc matrix synapse homeserver signing key trusted key servers server name registrations require email enable lookup true registration requires token false allowed local medium email pattern example local email smtp host mail example local smtp port force tls false require transport security false enable tls false notif from your friendly app s homeserver app name pep matrix enable notifs true notif for new users true client base url validation token lifetime invite client location subjects password config enabled true localdb enabled false policy enabled true modules module ldap auth provider ldapauthprovidermodule config enabled true 
password providers module rest auth provider restauthprovider config endpoint my conf matrix domain matrix geek local false true key path var lib keys storage provider sqlite database etc db synapsesql enabled true connection var lib matrix synapse homeserver db threepid medium email domain whitelist example local identity from test example local connectors smtp host mail example local tls port ldap enabled true lookup true hash lookup activedirectory true mode search defaultdomain example local connection host xxxx port binddn test example local bindpassword xxxx basedns ou test dc example dc local attribute mail email name displayname dns overwrite homeserver client name matrix geek local value config policy registration username enforcelowercase false logging root info app info requests false log file operating system ubuntu ubuntu windows browser information version official build bit firefox bit url for webapp local unable to be published on internet application version homeserver local unable to be published on internet will you send logs no
| 1
|
63,341
| 26,358,431,877
|
IssuesEvent
|
2023-01-11 11:30:45
|
GovernIB/ripea
|
https://api.github.com/repos/GovernIB/ripea
|
closed
|
Add PINBAL's service for checking that Social Security payments are up to date
|
Tipus:Nova_Funcionalitat Prioritat:Normal Lloc:WebServices
|
Add the service "Q2827003ATGSS001 Estar al corriente de pago con la Seguridad Social" to the list of SCSP services that can be queried from RIPEA through the library that facilitates SCSP queries. Depends on GovernIB/pinbal#155
|
1.0
|
Add PINBAL's service for checking that Social Security payments are up to date - Add the service "Q2827003ATGSS001 Estar al corriente de pago con la Seguridad Social" to the list of SCSP services that can be queried from RIPEA through the library that facilitates SCSP queries. Depends on GovernIB/pinbal#155
|
non_defect
|
afegir el servei de consulta d estar al corrent de pagament amb la seguretat social de pinbal afegir el servei estar al corriente de pago con la seguridad social al llistat de serveis scsp que es poden consultar des de ripea a través de la llibreria que facilita les consultes scsp depen de governib pinbal
| 0
|
42,030
| 10,755,318,490
|
IssuesEvent
|
2019-10-31 08:53:15
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
closed
|
Creating tables using DSLContext.ddl() converts VARBINARY columns to TEXT in MySQL
|
C: DB: Aurora MySQL C: DB: MariaDB C: DB: MemSQL C: DB: MySQL C: Functionality E: All Editions P: High T: Defect
|
### Behavior:
When creating tables with DSLContext.ddl(), columns with type SQLDataType.VARBINARY are instead created with type TEXT in the database. I expect the VARBINARY type to be preserved.
### Repro:
See https://github.com/trdesilva/jOOQ-mcve
```java
@Test
public void mcveTest() {
    ctx.ddl(Testtable.TESTTABLE, new DDLExportConfiguration().createTableIfNotExists(true)).executeBatch();

    byte[] bytes = new byte[256];
    for (int i = 0; i < bytes.length; i++) {
        bytes[i] = (byte) (i - Byte.MAX_VALUE);
    }

    // this throws with the following on MySQL 5.6.43, but not 5.6.39:
    // org.jooq.exception.DataAccessException: SQL [insert into `TestTable` (`Foo`, `Bar`) values (?, ?)]; Incorrect string value: '\x81\x82\x83\x84\x85\x86...' for column 'Foo' at row 1
    ctx.insertInto(Testtable.TESTTABLE).values(bytes, bytes).execute();
}
```
### Versions:
- jOOQ: 3.12.2
- Java: OpenJDK 11.0.4+11
- Database (include vendor): Oracle MySQL 5.6.39/5.6.43 Community Server
- OS: Ubuntu 16.04/AWS RDS
- JDBC Driver (include name if inofficial driver): mysql-connector-java 5.1.46
|
1.0
|
Creating tables using DSLContext.ddl() converts VARBINARY columns to TEXT in MySQL - ### Behavior:
When creating tables with DSLContext.ddl(), columns with type SQLDataType.VARBINARY are instead created with type TEXT in the database. I expect the VARBINARY type to be preserved.
### Repro:
See https://github.com/trdesilva/jOOQ-mcve
```java
@Test
public void mcveTest() {
    ctx.ddl(Testtable.TESTTABLE, new DDLExportConfiguration().createTableIfNotExists(true)).executeBatch();

    byte[] bytes = new byte[256];
    for (int i = 0; i < bytes.length; i++) {
        bytes[i] = (byte) (i - Byte.MAX_VALUE);
    }

    // this throws with the following on MySQL 5.6.43, but not 5.6.39:
    // org.jooq.exception.DataAccessException: SQL [insert into `TestTable` (`Foo`, `Bar`) values (?, ?)]; Incorrect string value: '\x81\x82\x83\x84\x85\x86...' for column 'Foo' at row 1
    ctx.insertInto(Testtable.TESTTABLE).values(bytes, bytes).execute();
}
```
### Versions:
- jOOQ: 3.12.2
- Java: OpenJDK 11.0.4+11
- Database (include vendor): Oracle MySQL 5.6.39/5.6.43 Community Server
- OS: Ubuntu 16.04/AWS RDS
- JDBC Driver (include name if inofficial driver): mysql-connector-java 5.1.46
|
defect
|
creating tables using dslcontext ddl converts varbinary columns to text in mysql behavior when creating tables with dslcontext ddl columns with type sqldatatype varbinary are instead created with type text in the database i expect the varbinary type to be preserved repro see test public void mcvetest ctx ddl testtable testtable new ddlexportconfiguration createtableifnotexists true executebatch byte bytes new byte for int i i bytes length i bytes byte i byte max value this throws with the following on mysql but not org jooq exception dataaccessexception sql incorrect string value for column foo at row ctx insertinto testtable testtable values bytes bytes execute versions jooq java openjdk database include vendor oracle mysql community server os ubuntu aws rds jdbc driver include name if inofficial driver mysql connector java
| 1
|
164,043
| 12,758,334,969
|
IssuesEvent
|
2020-06-29 01:53:46
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
opened
|
Failing test: X-Pack Jest Tests.x-pack/plugins/security_solution/public/management/pages/endpoint_hosts/view - when on the hosts page when there is no selected host in the url should not show the flyout
|
failed-test
|
A test failed on a tracked branch
```
Error: thrown: "Exceeded timeout of 5000ms for a test.
Use jest.setTimeout(newTimeout) to increase the timeout value, if this is a long-running test."
at describe (/dev/shm/workspace/kibana/x-pack/plugins/security_solution/public/management/pages/endpoint_hosts/view/index.test.tsx:50:5)
at _dispatchDescribe (/dev/shm/workspace/kibana/node_modules/jest-circus/build/index.js:67:26)
at describe (/dev/shm/workspace/kibana/node_modules/jest-circus/build/index.js:30:5)
at Object.<anonymous>.describe (/dev/shm/workspace/kibana/x-pack/plugins/security_solution/public/management/pages/endpoint_hosts/view/index.test.tsx:49:3)
at _dispatchDescribe (/dev/shm/workspace/kibana/node_modules/jest-circus/build/index.js:67:26)
at describe (/dev/shm/workspace/kibana/node_modules/jest-circus/build/index.js:30:5)
at Object.<anonymous> (/dev/shm/workspace/kibana/x-pack/plugins/security_solution/public/management/pages/endpoint_hosts/view/index.test.tsx:23:1)
at Runtime._execModule (/dev/shm/workspace/kibana/node_modules/jest-runtime/build/index.js:1205:24)
at Runtime._loadModule (/dev/shm/workspace/kibana/node_modules/jest-runtime/build/index.js:805:12)
at Runtime.requireModule (/dev/shm/workspace/kibana/node_modules/jest-runtime/build/index.js:662:10)
at jestAdapter (/dev/shm/workspace/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapter.js:145:13)
at process._tickCallback (internal/process/next_tick.js:68:7)
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+7.x/6201/)
<!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Jest Tests.x-pack/plugins/security_solution/public/management/pages/endpoint_hosts/view","test.name":"when on the hosts page when there is no selected host in the url should not show the flyout","test.failCount":1}} -->
|
1.0
|
Failing test: X-Pack Jest Tests.x-pack/plugins/security_solution/public/management/pages/endpoint_hosts/view - when on the hosts page when there is no selected host in the url should not show the flyout - A test failed on a tracked branch
```
Error: thrown: "Exceeded timeout of 5000ms for a test.
Use jest.setTimeout(newTimeout) to increase the timeout value, if this is a long-running test."
at describe (/dev/shm/workspace/kibana/x-pack/plugins/security_solution/public/management/pages/endpoint_hosts/view/index.test.tsx:50:5)
at _dispatchDescribe (/dev/shm/workspace/kibana/node_modules/jest-circus/build/index.js:67:26)
at describe (/dev/shm/workspace/kibana/node_modules/jest-circus/build/index.js:30:5)
at Object.<anonymous>.describe (/dev/shm/workspace/kibana/x-pack/plugins/security_solution/public/management/pages/endpoint_hosts/view/index.test.tsx:49:3)
at _dispatchDescribe (/dev/shm/workspace/kibana/node_modules/jest-circus/build/index.js:67:26)
at describe (/dev/shm/workspace/kibana/node_modules/jest-circus/build/index.js:30:5)
at Object.<anonymous> (/dev/shm/workspace/kibana/x-pack/plugins/security_solution/public/management/pages/endpoint_hosts/view/index.test.tsx:23:1)
at Runtime._execModule (/dev/shm/workspace/kibana/node_modules/jest-runtime/build/index.js:1205:24)
at Runtime._loadModule (/dev/shm/workspace/kibana/node_modules/jest-runtime/build/index.js:805:12)
at Runtime.requireModule (/dev/shm/workspace/kibana/node_modules/jest-runtime/build/index.js:662:10)
at jestAdapter (/dev/shm/workspace/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapter.js:145:13)
at process._tickCallback (internal/process/next_tick.js:68:7)
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+7.x/6201/)
<!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Jest Tests.x-pack/plugins/security_solution/public/management/pages/endpoint_hosts/view","test.name":"when on the hosts page when there is no selected host in the url should not show the flyout","test.failCount":1}} -->
|
non_defect
|
failing test x pack jest tests x pack plugins security solution public management pages endpoint hosts view when on the hosts page when there is no selected host in the url should not show the flyout a test failed on a tracked branch error thrown exceeded timeout of for a test use jest settimeout newtimeout to increase the timeout value if this is a long running test at describe dev shm workspace kibana x pack plugins security solution public management pages endpoint hosts view index test tsx at dispatchdescribe dev shm workspace kibana node modules jest circus build index js at describe dev shm workspace kibana node modules jest circus build index js at object describe dev shm workspace kibana x pack plugins security solution public management pages endpoint hosts view index test tsx at dispatchdescribe dev shm workspace kibana node modules jest circus build index js at describe dev shm workspace kibana node modules jest circus build index js at object dev shm workspace kibana x pack plugins security solution public management pages endpoint hosts view index test tsx at runtime execmodule dev shm workspace kibana node modules jest runtime build index js at runtime loadmodule dev shm workspace kibana node modules jest runtime build index js at runtime requiremodule dev shm workspace kibana node modules jest runtime build index js at jestadapter dev shm workspace kibana node modules jest circus build legacy code todo rewrite jestadapter js at process tickcallback internal process next tick js first failure
| 0
|
747,464
| 26,084,925,588
|
IssuesEvent
|
2022-12-26 00:36:02
|
Lincoln-LM/sv-live-map
|
https://api.github.com/repos/Lincoln-LM/sv-live-map
|
opened
|
Auto host (discord integration!) and further automation
|
enhancement low priority
|
Finding shiny dens goes hand-in-hand with actually hosting said dens, supporting auto-host functionality would be a nice enhancement for those using sv-live-map for the purpose of hosting for others. In addition, this may pave way for an automation framework that could be used for the likes of auto-shiny hunting w/overworld scan, and auto outbreak resetting.
|
1.0
|
Auto host (discord integration!) and further automation - Finding shiny dens goes hand-in-hand with actually hosting said dens, supporting auto-host functionality would be a nice enhancement for those using sv-live-map for the purpose of hosting for others. In addition, this may pave way for an automation framework that could be used for the likes of auto-shiny hunting w/overworld scan, and auto outbreak resetting.
|
non_defect
|
auto host discord integration and further automation finding shiny dens goes hand in hand with actually hosting said dens supporting auto host functionality would be a nice enhancement for those using sv live map for the purpose of hosting for others in addition this may pave way for an automation framework that could be used for the likes of auto shiny hunting w overworld scan and auto outbreak resetting
| 0
|
18,780
| 3,086,962,729
|
IssuesEvent
|
2015-08-25 08:28:54
|
jserranohidalgo/test-trac
|
https://api.github.com/repos/jserranohidalgo/test-trac
|
opened
|
Contextos de declaración huérfanos
|
P: trivial T: defect
|
**Reported by jserrano on 7 May 2014 11:09 UTC**
Se da un alta desde la interfaz, y primero se hace un setup de la declaracin (de alta, por ejemplo). Si despus falla el alta (21), el contexto no se cierra. Opciones:
* Se hace todo en el mismo attempt
** attempt(for{ setup <- Say(SetUp(..),...); NewEntity(interaccion,ag1,...) <- react; _ <- Say(Alta(...),interaccion,ag1)})
** attempt(Say(SetUpAndAlta(..))
* Se sigue haciendo en dos attempts
** La interfaz hace un close
** No se hace nada, y cuando se consolide el proceso, se cierra
|
1.0
|
Contextos de declaración huérfanos - **Reported by jserrano on 7 May 2014 11:09 UTC**
Se da un alta desde la interfaz, y primero se hace un setup de la declaracin (de alta, por ejemplo). Si despus falla el alta (21), el contexto no se cierra. Opciones:
* Se hace todo en el mismo attempt
** attempt(for{ setup <- Say(SetUp(..),...); NewEntity(interaccion,ag1,...) <- react; _ <- Say(Alta(...),interaccion,ag1)})
** attempt(Say(SetUpAndAlta(..))
* Se sigue haciendo en dos attempts
** La interfaz hace un close
** No se hace nada, y cuando se consolide el proceso, se cierra
|
defect
|
contextos de declaración huérfanos reported by jserrano on may utc se da un alta desde la interfaz y primero se hace un setup de la declaracin de alta por ejemplo si despus falla el alta el contexto no se cierra opciones se hace todo en el mismo attempt attempt for setup say setup newentity interaccion react say alta interaccion attempt say setupandalta se sigue haciendo en dos attempts la interfaz hace un close no se hace nada y cuando se consolide el proceso se cierra
| 1
|
39,241
| 9,334,675,642
|
IssuesEvent
|
2019-03-28 16:49:33
|
PowerDNS/pdns
|
https://api.github.com/repos/PowerDNS/pdns
|
opened
|
rec: protobuf messages fields are not updated after being cached
|
defect rec
|
<!-- Hi! Thanks for filing an issue. It will be read with care by human beings. Can we ask you to please fill out this template and not simply demand new features or send in complaints? Thanks! -->
<!-- Also please search the existing issues (both open and closed) to see if your report might be duplicate -->
<!-- Please don't file an issue when you have a support question, send support questions to the mailinglist or ask them on IRC (https://www.powerdns.com/opensource.html) -->
<!-- Tell us what is issue is about -->
- Program: Recursor <!-- delete the ones that do not apply -->
- Issue type: Bug report <!-- delete the one that does not apply -->
### Short description
<!-- Explain in a few sentences what the issue/request is -->
It looks like some fields are not properly updated when the response is taken from the packet cache.
### Description
<!-- Describe as extensively as possible what you want the software to do -->
When a response is read from the cache, and protobuf logging is enabled, it seems that the following protobuf message's fields are not properly updated for new responses : `appliedPolicy` and `tags`.
|
1.0
|
rec: protobuf messages fields are not updated after being cached - <!-- Hi! Thanks for filing an issue. It will be read with care by human beings. Can we ask you to please fill out this template and not simply demand new features or send in complaints? Thanks! -->
<!-- Also please search the existing issues (both open and closed) to see if your report might be duplicate -->
<!-- Please don't file an issue when you have a support question, send support questions to the mailinglist or ask them on IRC (https://www.powerdns.com/opensource.html) -->
<!-- Tell us what is issue is about -->
- Program: Recursor <!-- delete the ones that do not apply -->
- Issue type: Bug report <!-- delete the one that does not apply -->
### Short description
<!-- Explain in a few sentences what the issue/request is -->
It looks like some fields are not properly updated when the response is taken from the packet cache.
### Description
<!-- Describe as extensively as possible what you want the software to do -->
When a response is read from the cache, and protobuf logging is enabled, it seems that the following protobuf message's fields are not properly updated for new responses : `appliedPolicy` and `tags`.
|
defect
|
rec protobuf messages fields are not updated after being cached program recursor issue type bug report short description it looks like some fields are not properly updated when the response is taken from the packet cache description when a response is read from the cache and protobuf logging is enabled it seems that the following protobuf message s fields are not properly updated for new responses appliedpolicy and tags
| 1
|
69,522
| 7,137,474,696
|
IssuesEvent
|
2018-01-23 11:07:47
|
emfoundation/asset-manager
|
https://api.github.com/repos/emfoundation/asset-manager
|
closed
|
On folder list view, the total number of folders is displayed incorrectly in two places
|
bug please test priority-3
|
This is probably because hidden folders are also counted.
|
1.0
|
On folder list view, the total number of folders is displayed incorrectly in two places - This is probably because hidden folders are also counted.
|
non_defect
|
on folder list view the total number of folders is displayed incorrectly in two places this is probably because hidden folders are also counted
| 0
|
144,533
| 11,623,169,764
|
IssuesEvent
|
2020-02-27 08:23:27
|
dasch-swiss/knora-api
|
https://api.github.com/repos/dasch-swiss/knora-api
|
opened
|
Run tests against a single running knora-stack
|
testing
|
Any tests that currently start `knora-api` should run against an externally started knora-stack.
Value proposition:
- allow using these tests as we use them now, but also to point them to any kind of knora-stack installation, and have it thoroughly checked (which we need but are currently missing for testing our "Infrastructure as Code")
- solve the intermittent BindException problem
- run tests a bit faster
|
1.0
|
Run tests against a single running knora-stack - Any tests that currently start `knora-api` should run against an externally started knora-stack.
Value proposition:
- allow using these tests as we use them now, but also to point them to any kind of knora-stack installation, and have it thoroughly checked (which we need but are currently missing for testing our "Infrastructure as Code")
- solve the intermittent BindException problem
- run tests a bit faster
|
non_defect
|
run tests against a single running knora stack any tests that currently start knora api should run against an externally started knora stack value proposition allow using these tests as we use them now but also to point them to any kind of knora stack installation and have it thoroughly checked which we need but are currently missing for testing our infrastructure as code solve the intermittent bindexception problem run tests a bit faster
| 0
|
66,282
| 20,112,902,455
|
IssuesEvent
|
2022-02-07 16:35:51
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
PIP window closes when you switch to another room
|
T-Defect S-Minor A-Widgets O-Uncommon Z-FOSDEM
|
### Steps to reproduce
(unfortunately doesn't reproduce reliably, however when the underlying problem seems active, it works all the time )
A new day, opening chat.fosdem.org: no problems;
After closing the tab with an active chat.fosdem.org session (verified) after a few minutes and reopening it:
1. Switch to "Home" selection on the left and then click "Home" to see the fosdem overview page
2. Open FOSDEM 2022 Space on the left
3. Open the "Test Track 1" by clicking the already joined room in the list on the left
4. make sure the "Livestream" widget is pinned
5. Open the stream in PIP mode by using the in-widget controls
6. Click on "Matrix Stand" room
### Outcome
#### What did you expect?
PIP window stays visible
#### What happened instead?
PIP window vanishes
### Operating system
Windows 10 Pro 19044.1469
### Browser information
Edge Version 97.0.1072.76 (Official build) (64-bit)
### URL for webapp
https://chat.fosdem.org/
### Application version
FOSDEM 2022 version: 1.10.1 Olm version: 3.2.8
### Homeserver
matrix.org
### Will you send logs?
Yes -> https://github.com/matrix-org/element-web-rageshakes/issues/10382
|
1.0
|
PIP window closes when you switch to another room - ### Steps to reproduce
(unfortunately doesn't reproduce reliably, however when the underlying problem seems active, it works all the time )
A new day, opening chat.fosdem.org: no problems;
After closing the tab with an active chat.fosdem.org session (verified) after a few minutes and reopening it:
1. Switch to "Home" selection on the left and then click "Home" to see the fosdem overview page
2. Open FOSDEM 2022 Space on the left
3. Open the "Test Track 1" by clicking the already joined room in the list on the left
4. make sure the "Livestream" widget is pinned
5. Open the stream in PIP mode by using the in-widget controls
6. Click on "Matrix Stand" room
### Outcome
#### What did you expect?
PIP window stays visible
#### What happened instead?
PIP window vanishes
### Operating system
Windows 10 Pro 19044.1469
### Browser information
Edge Version 97.0.1072.76 (Official build) (64-bit)
### URL for webapp
https://chat.fosdem.org/
### Application version
FOSDEM 2022 version: 1.10.1 Olm version: 3.2.8
### Homeserver
matrix.org
### Will you send logs?
Yes -> https://github.com/matrix-org/element-web-rageshakes/issues/10382
|
defect
|
pip window closes when you switch to another room steps to reproduce unfortunately doesn t reproduce reliably however when the underlying problem seems active it works all the time a new day opening chat fosdem org no problems after closing the tab with an active chat fosdem org session verified after a few minutes and reopening it switch to home selection on the left and then click home to see the fosdem overview page open fosdem space on the left open the test track by clicking the already joined room in the list on the left make sure the livestream widget is pinned open the stream in pip mode by using the in widget controls click on matrix stand room outcome what did you expect pip window stays visible what happened instead pip window vanishes operating system windows pro browser information edge version official build bit url for webapp application version fosdem version olm version homeserver matrix org will you send logs yes
| 1
|
289,175
| 24,965,861,837
|
IssuesEvent
|
2022-11-01 19:18:58
|
ibm-openbmc/dev
|
https://api.github.com/repos/ibm-openbmc/dev
|
closed
|
Redfish Maintenance Logs
|
Epic prio_low Test on WSP-TAC
|
The request is:
Provides logs for when the hardware or firmware change in the system
Firmware changes are recorded today on FSP
CPU changed from SN xxxx to SN yyyy
Firmware changed from version xxxx to yyyy
Need to flag VPD changes up the stack.
|
1.0
|
Redfish Maintenance Logs - The request is:
Provides logs for when the hardware or firmware change in the system
Firmware changes are recorded today on FSP
CPU changed from SN xxxx to SN yyyy
Firmware changed from version xxxx to yyyy
Need to flag VPD changes up the stack.
|
non_defect
|
redfish maintenance logs the request is provides logs for when the hardware or firmware change in the system firmware changes are recorded today on fsp cpu changed from sn xxxx to sn yyyy firmware changed from version xxxx to yyyy need to flag vpd changes up the stack
| 0
|
57,077
| 15,650,045,256
|
IssuesEvent
|
2021-03-23 08:28:09
|
hazelcast/hazelcast-jet
|
https://api.github.com/repos/hazelcast/hazelcast-jet
|
closed
|
Snapshot Phase 2 may get stuck
|
defect
|
While solving a test problem in #2454, we realized the underlying cause was a snapshot stuck in phase 2. System load seems to have been light immediately before and after the getting stuck event.
|
1.0
|
Snapshot Phase 2 may get stuck - While solving a test problem in #2454, we realized the underlying cause was a snapshot stuck in phase 2. System load seems to have been light immediately before and after the getting stuck event.
|
defect
|
snapshot phase may get stuck while solving a test problem in we realized the underlying cause was a snapshot stuck in phase system load seems to have been light immediately before and after the getting stuck event
| 1
|
330,259
| 10,037,616,403
|
IssuesEvent
|
2019-07-18 13:33:36
|
wrattler/wrattler
|
https://api.github.com/repos/wrattler/wrattler
|
closed
|
[jupyter] Refresh causes code changes to disappear
|
status-priority type-bug
|
JupyterLab refreshes the page when you switch tabs, which makes code changes disappear.
|
1.0
|
[jupyter] Refresh causes code changes to disappear - JupyterLab refreshes the page when you switch tabs, which makes code changes disappear.
|
non_defect
|
refresh causes code changes to disappear jupyterlab refreshes the page when you switch tabs which makes code changes disappear
| 0
|
228,508
| 18,239,343,811
|
IssuesEvent
|
2021-10-01 10:57:18
|
HyphaApp/hypha
|
https://api.github.com/repos/HyphaApp/hypha
|
closed
|
Make 'View Message Log' tab only available to 'Staff Admin' role
|
Type: Enhancement Status: Tested - approved for live ✅ Partner: OTF Priority: Low
|
## User story
It is not obvious to staff what the 'View Message Log' tab is.
## Describe the solution you'd like in Hypha
Make 'View Message Log' tab only available to 'Staff Admin' role.
**Priority**
- Low priority (annoying, would be nice to not see)
**Affected roles**
- Staff
**Ideal deadline**
December 2021
|
1.0
|
Make 'View Message Log' tab only available to 'Staff Admin' role - ## User story
It is not obvious to staff what the 'View Message Log' tab is.
## Describe the solution you'd like in Hypha
Make 'View Message Log' tab only available to 'Staff Admin' role.
**Priority**
- Low priority (annoying, would be nice to not see)
**Affected roles**
- Staff
**Ideal deadline**
December 2021
|
non_defect
|
make view message log tab only available to staff admin role user story it is not obvious to staff what the view message log tab is describe the solution you d like in hypha make view message log tab only available to staff admin role priority low priority annoying would be nice to not see affected roles staff ideal deadline december
| 0
|
12,579
| 2,711,483,019
|
IssuesEvent
|
2015-04-09 06:42:17
|
google/google-api-go-client
|
https://api.github.com/repos/google/google-api-go-client
|
closed
|
YouTube v3 video/upload, setting the snippet fails when non-alphanumeric characters are used
|
new priority-medium type-defect
|
**janbirsacom** on 9 Jun 2014 at 8:58:
```
What steps will reproduce the problem?
1. Use YouTube video upload sample code from:
https://developers.google.com/youtube/v3/docs/videos/insert
2. In video description field, put non-alphanumeric characters like '<3'.
3. Upload a sample video.
What is the expected output? What do you see instead?
Expected: 200, successful upload
Got: 400 Bad Request
What version of the product are you using? On what operating system?
Latest (2ba9f0995cf0215c20ebd6de43a14d70af30fea6)
Please provide any additional information below.
http://stackoverflow.com/questions/24075229/youtube-upload-v3-400-bad-request
```
|
1.0
|
YouTube v3 video/upload, setting the snippet fails when non-alphanumeric characters are used -
**janbirsacom** on 9 Jun 2014 at 8:58:
```
What steps will reproduce the problem?
1. Use YouTube video upload sample code from:
https://developers.google.com/youtube/v3/docs/videos/insert
2. In video description field, put non-alphanumeric characters like '<3'.
3. Upload a sample video.
What is the expected output? What do you see instead?
Expected: 200, successful upload
Got: 400 Bad Request
What version of the product are you using? On what operating system?
Latest (2ba9f0995cf0215c20ebd6de43a14d70af30fea6)
Please provide any additional information below.
http://stackoverflow.com/questions/24075229/youtube-upload-v3-400-bad-request
```
|
defect
|
youtube video upload setting the snippet fails when non alphanumeric characters are used janbirsacom on jun at what steps will reproduce the problem use youtube video upload sample code from in video description field put non alphanumeric characters like upload a sample video what is the expected output what do you see instead expected successful upload got bad request what version of the product are you using on what operating system latest please provide any additional information below
| 1
|
12,475
| 2,700,770,082
|
IssuesEvent
|
2015-04-04 15:06:21
|
cakephp/cakephp
|
https://api.github.com/repos/cakephp/cakephp
|
reopened
|
3.0 - Cookies unable to contain brackets
|
component Defect
|
It's possible this is one of those things that are impossible to fix, but right now I can't use `CookieComponent` to write cookies that have brackets in the name. In my 2.x app there are cookies that are set with the array-style, `CakeCookie[ThisThing]`, but in 3.0 I can't just do this:
```
$this->Cookie->write('CakeCookie[ThisThing]', 10);
```
Trying to use dots, `Cookie->write('CakeCookie.ThisThing', 10)` results in the cookie name being `CakeCookie`, which isn't usable either. (Edit - I'm pretty sure that's as intended, though. Just trying to see how I can get an array-style cookie name.) It seems that it's the `Hash` class:
```
#1
Hash::insert([], 'CakeCookie[ThisThing]', 'hello'); # -> Array ()
#2
Hash::insert([], 'what', 'hello'); # -> Array ( [what] => hello )
```
Should `#1` above be an empty array? I don't know what the special conditions `Hash::insert` are doing with brackets, but `noTokens` in that method seems to be messing this up.
Thanks
|
1.0
|
3.0 - Cookies unable to contain brackets - It's possible this is one of those things that are impossible to fix, but right now I can't use `CookieComponent` to write cookies that have brackets in the name. In my 2.x app there are cookies that are set with the array-style, `CakeCookie[ThisThing]`, but in 3.0 I can't just do this:
```
$this->Cookie->write('CakeCookie[ThisThing]', 10);
```
Trying to use dots, `Cookie->write('CakeCookie.ThisThing', 10)` results in the cookie name being `CakeCookie`, which isn't usable either. (Edit - I'm pretty sure that's as intended, though. Just trying to see how I can get an array-style cookie name.) It seems that it's the `Hash` class:
```
#1
Hash::insert([], 'CakeCookie[ThisThing]', 'hello'); # -> Array ()
#2
Hash::insert([], 'what', 'hello'); # -> Array ( [what] => hello )
```
Should `#1` above be an empty array? I don't know what the special conditions `Hash::insert` are doing with brackets, but `noTokens` in that method seems to be messing this up.
Thanks
|
defect
|
cookies unable to contain brackets it s possible this is one of those things that are impossible to fix but right now i can t use cookiecomponent to write cookies that have brackets in the name in my x app there are cookies that are set with the array style cakecookie but in i can t just do this this cookie write cakecookie trying to use dots cookie write cakecookie thisthing results in the cookie name being cakecookie which isn t usable either edit i m pretty sure that s as intended though just trying to see how i can get an array style cookie name it seems that it s the hash class hash insert cakecookie hello array hash insert what hello array hello should above be an empty array i don t know what the special conditions hash insert are doing with brackets but notokens in that method seems to be messing this up thanks
| 1
|
7,731
| 2,610,434,760
|
IssuesEvent
|
2015-02-26 20:22:25
|
chrsmith/scribefire-chrome
|
https://api.github.com/repos/chrsmith/scribefire-chrome
|
opened
|
Cannot connect to Wordpress hosted blog
|
auto-migrated Priority-Medium Type-Defect
|
```
What's the problem?
I cannot get Scribefire to connect to a Wordpress hosted blog
http://neutronbytes.com
However, when I go to a commercially hosted blog http://blog.cdpug.org/ it
works perfectly.
What browser are you using?
Chrome
What Operating system are you using
Windows 7
What version of ScribeFire are you running?
Latest - installed from Google Chrome store 2/8/15
What Blog Type are you having this problem with? Please include version #
Wordpress hosted
if known or applicable
```
-----
Original issue reported on code.google.com by `djy...@gmail.com` on 9 Feb 2015 at 5:17
|
1.0
|
Cannot connect to Wordpress hosted blog - ```
What's the problem?
I cannot get Scribefire to connect to a Wordpress hosted blog
http://neutronbytes.com
However, when I go to a commercially hosted blog http://blog.cdpug.org/ it
works perfectly.
What browser are you using?
Chrome
What Operating system are you using
Windows 7
What version of ScribeFire are you running?
Latest - installed from Google Chrome store 2/8/15
What Blog Type are you having this problem with? Please include version #
Wordpress hosted
if known or applicable
```
-----
Original issue reported on code.google.com by `djy...@gmail.com` on 9 Feb 2015 at 5:17
|
defect
|
cannot connect to wordpress hosted blog what s the problem i cannot get scribefire to connect to a wordpress hosted blog however when i go to a commercially hosted blog it works perfectly what browser are you using chrome what operating system are you using windows what version of scribefire are you running latest installed from google chrome store what blog type are you having this problem with please include version wordpress hosted if known or applicable original issue reported on code google com by djy gmail com on feb at
| 1
|
652,065
| 21,520,489,284
|
IssuesEvent
|
2022-04-28 13:50:01
|
wp-media/wp-rocket
|
https://api.github.com/repos/wp-media/wp-rocket
|
closed
|
CDN exclusions are not reflected in Used CSS
|
type: bug module: CDN priority: medium effort: [XS] severity: major module: remove unused css
|
**Before submitting an issue please check that you’ve completed the following steps:**
- Made sure you’re on the latest version
- Used the search feature to ensure that the bug hasn’t been reported before
**Describe the bug**
When using CDN and RUCSS, CDN exclusions are not taken into the consideration.
**To Reproduce**
1. Enable CDN for all files
2. Enable RUCSS
3. Exclude URL from CDN (URL existing in used CSS) i.e /test.svg
4. visit the page and check used CSS
**Expected behavior**
Excluded URL not rewritten to the CNAME
**Additional**
Moving it from private repo:
https://github.com/wp-media/nodejs-treeshaker/issues/46
**Backlog Grooming (for WP Media dev team use only)**
- [ ] Reproduce the problem
- [ ] Identify the root cause
- [ ] Scope a solution
- [ ] Estimate the effort
|
1.0
|
CDN exclusions are not reflected in Used CSS - **Before submitting an issue please check that you’ve completed the following steps:**
- Made sure you’re on the latest version
- Used the search feature to ensure that the bug hasn’t been reported before
**Describe the bug**
When using CDN and RUCSS, CDN exclusions are not taken into the consideration.
**To Reproduce**
1. Enable CDN for all files
2. Enable RUCSS
3. Exclude URL from CDN (URL existing in used CSS) i.e /test.svg
4. visit the page and check used CSS
**Expected behavior**
Excluded URL not rewritten to the CNAME
**Additional**
Moving it from private repo:
https://github.com/wp-media/nodejs-treeshaker/issues/46
**Backlog Grooming (for WP Media dev team use only)**
- [ ] Reproduce the problem
- [ ] Identify the root cause
- [ ] Scope a solution
- [ ] Estimate the effort
|
non_defect
|
cdn exclusions are not reflected in used css before submitting an issue please check that you’ve completed the following steps made sure you’re on the latest version used the search feature to ensure that the bug hasn’t been reported before describe the bug when using cdn and rucss cdn exclusions are not taken into the consideration to reproduce enable cdn for all files enable rucss exclude url from cdn url existing in used css i e test svg visit the page and check used css expected behavior excluded url not rewritten to the cname additional moving it from private repo backlog grooming for wp media dev team use only reproduce the problem identify the root cause scope a solution estimate the effort
| 0
|
150
| 2,516,037,729
|
IssuesEvent
|
2015-01-15 22:47:42
|
cakephp/cakephp
|
https://api.github.com/repos/cakephp/cakephp
|
closed
|
Maximum nesting level when ExceptionRenderer throws exception
|
Defect
|
First, the simple test:
```php
class FaultyExceptionRenderer extends ExceptionRenderer {
public function render() {
throw new Exception('Error from renderer.');
}
}
/**
* Add to ErrorHandlerTest case.
* testExceptionRendererException method
*
* @return void
*/
public function testExceptionRendererException() {
if (file_exists(LOGS . 'error.log')) {
unlink(LOGS . 'error.log');
}
Configure::write('Exception.renderer', 'FaultyExceptionRenderer');
ErrorHandler::handleFatalError(E_USER_ERROR, 'Initial error', __FILE__ ,__LINE__);
}
```
Error: `Fatal Error Error: Maximum function nesting level of '100' reached, aborting!`.
I have not suggested a fix yet, because there are several ways of handling this. The goal is that the exception thrown from the ExceptionRenderer should be caught *once* and handled *once*.
Context:
```php
// ErrorHandler::handleException
try {
$error = new $renderer($exception);
$error->render();
} catch (Exception $e) {
set_error_handler(Configure::read('Error.handler')); // Should be using configured ErrorHandler
Configure::write('Error.trace', false); // trace is useless here since it's internal
$message = sprintf("[%s] %s\n%s", // Keeping same message format
get_class($e),
$e->getMessage(),
$e->getTraceAsString()
);
trigger_error($message, E_USER_ERROR);
```
The `trigger_error()` triggers `ErrorHandler::handleError`, which leads to `Error::handleFatalError`, and then back again to `ErrorHandler::handleException`, and the cycle continues.
|
1.0
|
Maximum nesting level when ExceptionRenderer throws exception - First, the simple test:
```php
class FaultyExceptionRenderer extends ExceptionRenderer {
public function render() {
throw new Exception('Error from renderer.');
}
}
/**
* Add to ErrorHandlerTest case.
* testExceptionRendererException method
*
* @return void
*/
public function testExceptionRendererException() {
if (file_exists(LOGS . 'error.log')) {
unlink(LOGS . 'error.log');
}
Configure::write('Exception.renderer', 'FaultyExceptionRenderer');
ErrorHandler::handleFatalError(E_USER_ERROR, 'Initial error', __FILE__ ,__LINE__);
}
```
Error: `Fatal Error Error: Maximum function nesting level of '100' reached, aborting!`.
I have not suggested a fix yet, because there are several ways of handling this. The goal is that the exception thrown from the ExceptionRenderer should be caught *once* and handled *once*.
Context:
```php
// ErrorHandler::handleException
try {
$error = new $renderer($exception);
$error->render();
} catch (Exception $e) {
set_error_handler(Configure::read('Error.handler')); // Should be using configured ErrorHandler
Configure::write('Error.trace', false); // trace is useless here since it's internal
$message = sprintf("[%s] %s\n%s", // Keeping same message format
get_class($e),
$e->getMessage(),
$e->getTraceAsString()
);
trigger_error($message, E_USER_ERROR);
```
The `trigger_error()` triggers `ErrorHandler::handleError`, which leads to `Error::handleFatalError`, and then back again to `ErrorHandler::handleException`, and the cycle continues.
|
defect
|
maximum nesting level when exceptionrenderer throws exception first the simple test php class faultyexceptionrenderer extends exceptionrenderer public function render throw new exception error from renderer add to errorhandlertest case testexceptionrendererexception method return void public function testexceptionrendererexception if file exists logs error log unlink logs error log configure write exception renderer faultyexceptionrenderer errorhandler handlefatalerror e user error initial error file line error fatal error error maximum function nesting level of reached aborting i have not suggested a fix yet because there are several ways of handling this the goal is that the exception thrown from the exceptionrenderer should be caught once and handled once context php errorhandler handleexception try error new renderer exception error render catch exception e set error handler configure read error handler should be using configured errorhandler configure write error trace false trace is useless here since it s internal message sprintf s n s keeping same message format get class e e getmessage e gettraceasstring trigger error message e user error the trigger error triggers errorhandler handleerror which leads to error handlefatalerror and then back again to errorhandler handleexception and the cycle continues
| 1
|
25,879
| 4,487,509,757
|
IssuesEvent
|
2016-08-30 01:27:04
|
schuel/hmmm
|
https://api.github.com/repos/schuel/hmmm
|
closed
|
show loading-icon for sub-templates
|
defect enhancement Layout
|
In multiple places it shows a wrong message before the content loads. This is quite confusing, especially with a slow Internet connection, when the site changes again after 5-10 seconds.
### appearance
- [x] "There are no courses on this day" (/calendar)
- [x] "Relax, nothing happening today." (/frames/calendar)
- [x] events in a course are loaded later
- [x] groupnames are loaded with delay in coursList showing something like "removedGroup" first
- ...
can we implement a global loading icon?
|
1.0
|
|
defect
|
show loading icon for sub templates on multiple places it shows wrong message before loading content this is quite confusing specially with slow internet connection when site changes again after seconds appearance there are no courses on this day calendar relax nothing happening today frames calendar events in a course are loaded later groupnames are loaded with delay in courslist showing something like removedgroup first can we implement a global loading icon
| 1
|
281,871
| 21,315,444,540
|
IssuesEvent
|
2022-04-16 07:29:02
|
Kidsnd274/pe
|
https://api.github.com/repos/Kidsnd274/pe
|
opened
|
diagrams folder link incorrectly points to the old AB3 project in the developer guide
|
severity.Low type.DocumentationBug
|
diagrams folder link incorrectly points to the old AB3 project in the developer guide
The image below shows the incorrect link:

<!--session: 1650088649060-79fd5dc5-68c0-4a9a-8cfa-a788c21533ff-->
<!--Version: Web v3.4.2-->
|
1.0
|
|
non_defect
|
diagrams folder link incorrectly points to the old project in the developer guide diagrams folder link incorrectly points to the old project in the developer guide the image below shows the incorrect link
| 0
|
244,168
| 26,368,982,386
|
IssuesEvent
|
2023-01-11 18:56:53
|
dotnet/docs
|
https://api.github.com/repos/dotnet/docs
|
closed
|
Error "The parameter is incorrect" when decrypt file
|
support-request docs-experience Pri3 dotnet/prod dotnet-security/tech okr-health :pushpin: seQUESTered
|
I have included System.Security.Cryptography in my VB.NET project to decrypt files. On my computer the functionality works OK, but when I send the project to someone else they are presented with the ERROR "The parameter is incorrect" and the following Exception: 'System.Security.Cryptography.CryptographicException' in mscordlib.net
When the application tries to decrypt the file on other computers, it corrupts the file and leaves it unreadable.
I don't know if this is an error in my code or if this is a bug.
So I'm looking for some guidance to solve this problem.



` Private Sub DecryptFile(ByVal inFile As String)
' Create instance of Aes for symmetric decryption of the data.
Dim aes As Aes = Aes.Create()
' Create byte arrays to get the length of the encrypted key and IV.
' These values were stored as 4 bytes each at the beginning of the encrypted package.
Dim LenK As Byte() = New Byte(4 - 1) {}
Dim LenIV As Byte() = New Byte(4 - 1) {}
' Construct the file name for the decrypted file.
Dim outFile As String = DecrFolder & (inFile.Substring(0, inFile.LastIndexOf(".")) & ".xlsx")
' Use FileStream objects to read the encrypted
' file (inFs) and save the decrypted file (outFs).
Using inFs As New FileStream((EncrFile & inFile), FileMode.Open)
inFs.Seek(0, SeekOrigin.Begin)
inFs.Read(LenK, 0, 3)
inFs.Seek(4, SeekOrigin.Begin)
inFs.Read(LenIV, 0, 3)
Dim lengthK As Integer = BitConverter.ToInt32(LenK, 0)
Dim lengthIV As Integer = BitConverter.ToInt32(LenIV, 0)
Dim startC As Integer = (lengthK + lengthIV + 8)
Dim lenC As Integer = (CType(inFs.Length, Integer) - startC)
Dim KeyEncrypted As Byte() = New Byte(lengthK - 1) {}
Dim IV As Byte() = New Byte(lengthIV - 1) {}
' Extract the key and IV starting from index 8
' after the length values.
inFs.Seek(8, SeekOrigin.Begin)
inFs.Read(KeyEncrypted, 0, lengthK)
inFs.Seek(8 + lengthK, SeekOrigin.Begin)
inFs.Read(IV, 0, lengthIV)
Directory.CreateDirectory(DecrFolder)
' User RSACryptoServiceProvider to decrypt the AES key
Dim KeyDecrypted As Byte() = _rsa.Decrypt(KeyEncrypted, False)
' Decrypt the key.
Dim transform As ICryptoTransform = aes.CreateDecryptor(KeyDecrypted, IV)
                ' Decrypt the cipher text from the FileStream of the encrypted
' file (inFs) into the FileStream for the decrypted file (outFs).
Using outFs As New FileStream(outFile, FileMode.Create)
Dim count As Integer = 0
Dim offset As Integer = 0
' blockSizeBytes can be any arbitrary size.
Dim blockSizeBytes As Integer = (aes.BlockSize / 8)
Dim data As Byte() = New Byte(blockSizeBytes - 1) {}
' By decrypting a chunk a time, you can save memory and accommodate large files.
' Start at the beginning of the cipher text.
inFs.Seek(startC, SeekOrigin.Begin)
Using outStreamDecrypted As New CryptoStream(outFs, transform, CryptoStreamMode.Write)
Do
count = inFs.Read(data, 0, blockSizeBytes)
offset += count
outStreamDecrypted.Write(data, 0, count)
Loop Until (count = 0)
outStreamDecrypted.FlushFinalBlock()
End Using
End Using
End Using
End Sub`
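As an aside, the package layout this code assumes — two 4-byte little-endian lengths, then the RSA-encrypted AES key, the IV, and the ciphertext — can be sketched as follows (the `parse_header` helper and dummy package below are illustrative, not part of the original project). Note that the VB code reads only 3 of each 4 length bytes (`inFs.Read(LenK, 0, 3)`), which happens to work while lengths stay below 2^24 but is worth checking when the format misbehaves.

```python
import struct
from io import BytesIO


def parse_header(stream):
    """Split an encrypted package into (encrypted key, IV, ciphertext).

    Layout assumed from the walkthrough: a 4-byte little-endian key
    length, a 4-byte little-endian IV length, the key bytes, the IV
    bytes, then the cipher text running to the end of the stream.
    """
    len_k, len_iv = struct.unpack("<ii", stream.read(8))
    key_encrypted = stream.read(len_k)
    iv = stream.read(len_iv)
    ciphertext = stream.read()
    return key_encrypted, iv, ciphertext


# A dummy package: 128-byte "encrypted key", 16-byte IV, short body.
package = struct.pack("<ii", 128, 16) + b"K" * 128 + b"V" * 16 + b"body"
key, iv, body = parse_header(BytesIO(package))
```

That said, a likely cause of the original "The parameter is incorrect" error is not the header at all but the key itself: if `_rsa` is backed by a key container on the developer's machine, that key pair does not travel with the project, so `_rsa.Decrypt` on another computer uses a different key, the recovered AES key is wrong, and the output file comes out as garbage.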
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 2a8e441d-cd59-de15-a814-0d86d4cda052
* Version Independent ID: c94ca6d4-7e93-8aac-93d1-7f249c77d243
* Content: [Walkthrough: Create a Cryptographic Application](https://learn.microsoft.com/en-us/dotnet/standard/security/walkthrough-creating-a-cryptographic-application)
* Content Source: [docs/standard/security/walkthrough-creating-a-cryptographic-application.md](https://github.com/dotnet/docs/blob/main/docs/standard/security/walkthrough-creating-a-cryptographic-application.md)
* Product: **dotnet**
* Technology: **dotnet-security**
* GitHub Login: @IEvangelist
* Microsoft Alias: **dapine**
---
[Associated WorkItem - 58984](https://dev.azure.com/msft-skilling/Content/_workitems/edit/58984)
|
True
|
Error "The parameter is incorrect" when decrypt file - I have included in my VB.NET project the System.Security.Cryptography to decript files, in my computer the functionality works OK but when I send the project to someone they are presented with the ERROR "The parameter is incorrect" and it shows the following Exception: 'System.Security.Cryptography.CryptographicException' in mscordlib.net
When the application try to decript the file in other computers it corrupted the file and it's unreadeble.
I don't know this is a error of my code or if this is a bug.
So I'm looking for some guidance to solve this problem



` Private Sub DecryptFile(ByVal inFile As String)
' Create instance of Aes for symmetric decryption of the data.
Dim aes As Aes = Aes.Create()
' Create byte arrays to get the length of the encrypted key and IV.
' These values were stored as 4 bytes each at the beginning of the encrypted package.
Dim LenK As Byte() = New Byte(4 - 1) {}
Dim LenIV As Byte() = New Byte(4 - 1) {}
' Construct the file name for the decrypted file.
Dim outFile As String = DecrFolder & (inFile.Substring(0, inFile.LastIndexOf(".")) & ".xlsx")
' Use FileStream objects to read the encrypted
' file (inFs) and save the decrypted file (outFs).
Using inFs As New FileStream((EncrFile & inFile), FileMode.Open)
inFs.Seek(0, SeekOrigin.Begin)
inFs.Read(LenK, 0, 3)
inFs.Seek(4, SeekOrigin.Begin)
inFs.Read(LenIV, 0, 3)
Dim lengthK As Integer = BitConverter.ToInt32(LenK, 0)
Dim lengthIV As Integer = BitConverter.ToInt32(LenIV, 0)
Dim startC As Integer = (lengthK + lengthIV + 8)
Dim lenC As Integer = (CType(inFs.Length, Integer) - startC)
Dim KeyEncrypted As Byte() = New Byte(lengthK - 1) {}
Dim IV As Byte() = New Byte(lengthIV - 1) {}
' Extract the key and IV starting from index 8
' after the length values.
inFs.Seek(8, SeekOrigin.Begin)
inFs.Read(KeyEncrypted, 0, lengthK)
inFs.Seek(8 + lengthK, SeekOrigin.Begin)
inFs.Read(IV, 0, lengthIV)
Directory.CreateDirectory(DecrFolder)
' User RSACryptoServiceProvider to decrypt the AES key
Dim KeyDecrypted As Byte() = _rsa.Decrypt(KeyEncrypted, False)
' Decrypt the key.
Dim transform As ICryptoTransform = aes.CreateDecryptor(KeyDecrypted, IV)
' Decrypt the cipher text from from the FileSteam of the encrypted
' file (inFs) into the FileStream for the decrypted file (outFs).
Using outFs As New FileStream(outFile, FileMode.Create)
Dim count As Integer = 0
Dim offset As Integer = 0
' blockSizeBytes can be any arbitrary size.
Dim blockSizeBytes As Integer = (aes.BlockSize / 8)
Dim data As Byte() = New Byte(blockSizeBytes - 1) {}
' By decrypting a chunk a time, you can save memory and accommodate large files.
' Start at the beginning of the cipher text.
inFs.Seek(startC, SeekOrigin.Begin)
Using outStreamDecrypted As New CryptoStream(outFs, transform, CryptoStreamMode.Write)
Do
count = inFs.Read(data, 0, blockSizeBytes)
offset += count
outStreamDecrypted.Write(data, 0, count)
Loop Until (count = 0)
outStreamDecrypted.FlushFinalBlock()
End Using
End Using
End Using
End Sub`
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 2a8e441d-cd59-de15-a814-0d86d4cda052
* Version Independent ID: c94ca6d4-7e93-8aac-93d1-7f249c77d243
* Content: [Walkthrough: Create a Cryptographic Application](https://learn.microsoft.com/en-us/dotnet/standard/security/walkthrough-creating-a-cryptographic-application)
* Content Source: [docs/standard/security/walkthrough-creating-a-cryptographic-application.md](https://github.com/dotnet/docs/blob/main/docs/standard/security/walkthrough-creating-a-cryptographic-application.md)
* Product: **dotnet**
* Technology: **dotnet-security**
* GitHub Login: @IEvangelist
* Microsoft Alias: **dapine**
---
[Associated WorkItem - 58984](https://dev.azure.com/msft-skilling/Content/_workitems/edit/58984)
|
non_defect
|
error the parameter is incorrect when decrypt file i have included in my vb net project the system security cryptography to decript files in my computer the functionality works ok but when i send the project to someone they are presented with the error the parameter is incorrect and it shows the following exception system security cryptography cryptographicexception in mscordlib net when the application try to decript the file in other computers it corrupted the file and it s unreadeble i don t know this is a error of my code or if this is a bug so i m looking for some guidance to solve this problem private sub decryptfile byval infile as string create instance of aes for symmetric decryption of the data dim aes as aes aes create create byte arrays to get the length of the encrypted key and iv these values were stored as bytes each at the beginning of the encrypted package dim lenk as byte new byte dim leniv as byte new byte construct the file name for the decrypted file dim outfile as string decrfolder infile substring infile lastindexof xlsx use filestream objects to read the encrypted file infs and save the decrypted file outfs using infs as new filestream encrfile infile filemode open infs seek seekorigin begin infs read lenk infs seek seekorigin begin infs read leniv dim lengthk as integer bitconverter lenk dim lengthiv as integer bitconverter leniv dim startc as integer lengthk lengthiv dim lenc as integer ctype infs length integer startc dim keyencrypted as byte new byte lengthk dim iv as byte new byte lengthiv extract the key and iv starting from index after the length values infs seek seekorigin begin infs read keyencrypted lengthk infs seek lengthk seekorigin begin infs read iv lengthiv directory createdirectory decrfolder user rsacryptoserviceprovider to decrypt the aes key dim keydecrypted as byte rsa decrypt keyencrypted false decrypt the key dim transform as icryptotransform aes createdecryptor keydecrypted iv decrypt the cipher text from from the 
filesteam of the encrypted file infs into the filestream for the decrypted file outfs using outfs as new filestream outfile filemode create dim count as integer dim offset as integer blocksizebytes can be any arbitrary size dim blocksizebytes as integer aes blocksize dim data as byte new byte blocksizebytes by decrypting a chunk a time you can save memory and accommodate large files start at the beginning of the cipher text infs seek startc seekorigin begin using outstreamdecrypted as new cryptostream outfs transform cryptostreammode write do count infs read data blocksizebytes offset count outstreamdecrypted write data count loop until count outstreamdecrypted flushfinalblock end using end using end using end sub document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source product dotnet technology dotnet security github login ievangelist microsoft alias dapine
| 0
|
7,228
| 2,610,358,948
|
IssuesEvent
|
2015-02-26 19:56:12
|
chrsmith/scribefire-chrome
|
https://api.github.com/repos/chrsmith/scribefire-chrome
|
closed
|
ScribeFire not connecting to Tumblr
|
auto-migrated Priority-Medium tumblr Type-Defect
|
```
What's the problem?
ScribeFire Not connecting with Tumblr URL
What browser are you using?
Chrome
What version of ScribeFire are you running?
the latest 4.0. I downloaded it from Chrome last month
```
-----
Original issue reported on code.google.com by `kalimar...@gmail.com` on 16 Dec 2011 at 4:59
* Merged into: #766
|
1.0
|
|
defect
|
scribefire not connecting to tumblr what s the problem scribefire not connecting with tumblr url what browser are you using chrome what version of scribefire are you running the latest i downloaded it from chrome last month original issue reported on code google com by kalimar gmail com on dec at merged into
| 1
|
269,614
| 28,960,230,024
|
IssuesEvent
|
2023-05-10 01:25:14
|
dpteam/RK3188_TABLET
|
https://api.github.com/repos/dpteam/RK3188_TABLET
|
reopened
|
CVE-2021-28972 (Medium) detected in linuxv3.0
|
Mend: dependency security vulnerability
|
## CVE-2021-28972 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv3.0</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/verygreen/linux.git>https://github.com/verygreen/linux.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/dpteam/RK3188_TABLET/commit/0c501f5a0fd72c7b2ac82904235363bd44fd8f9e">0c501f5a0fd72c7b2ac82904235363bd44fd8f9e</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/pci/hotplug/rpadlpar_sysfs.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/pci/hotplug/rpadlpar_sysfs.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/pci/hotplug/rpadlpar_sysfs.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In drivers/pci/hotplug/rpadlpar_sysfs.c in the Linux kernel through 5.11.8, the RPA PCI Hotplug driver has a user-tolerable buffer overflow when writing a new device name to the driver from userspace, allowing userspace to write data to the kernel stack frame directly. This occurs because add_slot_store and remove_slot_store mishandle drc_name '\0' termination, aka CID-cc7a0bb058b8.
<p>Publish Date: 2021-03-22
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-28972>CVE-2021-28972</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28972">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28972</a></p>
<p>Release Date: 2021-03-22</p>
<p>Fix Resolution: v4.4.263, v4.9.263, v4.14.227, v4.19.183, v5.4.108, v5.10.26, v5.11.9, v5.12-rc4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
|
non_defect
|
cve medium detected in cve medium severity vulnerability vulnerable library linux kernel source tree library home page a href found in head commit a href found in base branch master vulnerable source files drivers pci hotplug rpadlpar sysfs c drivers pci hotplug rpadlpar sysfs c drivers pci hotplug rpadlpar sysfs c vulnerability details in drivers pci hotplug rpadlpar sysfs c in the linux kernel through the rpa pci hotplug driver has a user tolerable buffer overflow when writing a new device name to the driver from userspace allowing userspace to write data to the kernel stack frame directly this occurs because add slot store and remove slot store mishandle drc name termination aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
175,237
| 21,300,891,134
|
IssuesEvent
|
2022-04-15 02:50:44
|
ncorejava/moment
|
https://api.github.com/repos/ncorejava/moment
|
opened
|
CVE-2021-43138 (High) detected in async-2.6.3.tgz, async-1.5.2.tgz
|
security vulnerability
|
## CVE-2021-43138 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>async-2.6.3.tgz</b>, <b>async-1.5.2.tgz</b></p></summary>
<p>
<details><summary><b>async-2.6.3.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-2.6.3.tgz">https://registry.npmjs.org/async/-/async-2.6.3.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/grunt-contrib-clean/node_modules/async/package.json,/node_modules/grunt-contrib-watch/node_modules/async/package.json,/node_modules/sauce-connect-launcher/node_modules/async/package.json,/node_modules/grunt-string-replace/node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- grunt-contrib-watch-1.1.0.tgz (Root Library)
- :x: **async-2.6.3.tgz** (Vulnerable Library)
</details>
<details><summary><b>async-1.5.2.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-1.5.2.tgz">https://registry.npmjs.org/async/-/async-1.5.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- grunt-1.3.0.tgz (Root Library)
- grunt-legacy-util-2.0.0.tgz
- :x: **async-1.5.2.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability exists in Async through 3.2.1 (fixed in 3.2.2) , which could let a malicious user obtain privileges via the mapValues() method.
<p>Publish Date: 2022-04-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43138>CVE-2021-43138</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-43138">https://nvd.nist.gov/vuln/detail/CVE-2021-43138</a></p>
<p>Release Date: 2022-04-06</p>
<p>Fix Resolution (async): 3.2.2</p>
<p>Direct dependency fix Resolution (grunt): 1.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
|
non_defect
|
cve high detected in async tgz async tgz cve high severity vulnerability vulnerable libraries async tgz async tgz async tgz higher order functions and common patterns for asynchronous code library home page a href path to dependency file package json path to vulnerable library node modules grunt contrib clean node modules async package json node modules grunt contrib watch node modules async package json node modules sauce connect launcher node modules async package json node modules grunt string replace node modules async package json dependency hierarchy grunt contrib watch tgz root library x async tgz vulnerable library async tgz higher order functions and common patterns for asynchronous code library home page a href path to dependency file package json path to vulnerable library node modules async package json dependency hierarchy grunt tgz root library grunt legacy util tgz x async tgz vulnerable library found in base branch master vulnerability details a vulnerability exists in async through fixed in which could let a malicious user obtain privileges via the mapvalues method publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution async direct dependency fix resolution grunt step up your open source security game with whitesource
| 0
|
541,475
| 15,827,497,525
|
IssuesEvent
|
2021-04-06 08:46:01
|
ansible-collections/azure
|
https://api.github.com/repos/ansible-collections/azure
|
closed
|
Issue with azure_rm_dnsrecordset: 404 Client Error: Not Found for url
|
medium_priority work in
|
##### SUMMARY
Hi!
I get the following issue:
msrestazure.azure_exceptions.CloudError: 404 Client Error: Not Found for url: https://management.azure.com/subscriptions/e23cf4ad-2d71-446f-9864-82961ad66ae7/resourceGroups/mv-dns-rg/providers/Microsoft.Network/dnsZones/?api-version=2018-05-01\n",
my task:
---
- name: Setze A-Record in Azure
become: false
hosts: localhost
tasks:
- name: A-Record erstellen
azure_rm_dnsrecordset:
resource_group: mv-dns-rg
relative_name: "{{ inventory_hostname }}"
zone_name: "{{ ansible_facts.domain }}"
record_type: A
records:
- entry: "{{ publicipaddresse }}"
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
azure_rm_dnsrecordset
##### ANSIBLE VERSION
AWX 17.0.1
azure.azcollection:1.5.0
##### CONFIGURATION
nothing
##### OS / ENVIRONMENT
nothing
##### STEPS TO REPRODUCE
add azure credentials in AWX
run playbook/template
get error
(i tried re-creating the service principal - the same worked a week or two ago)
##### EXPECTED RESULTS
creating A record on azure DNS zone
##### ACTUAL RESULTS
receive error:
msrestazure.azure_exceptions.CloudError: 404 Client Error: Not Found for url: https://management.azure.com/subscriptions/e23cf4ad-2d71-446f-9864-82961ad66ae7/resourceGroups/mv-dns-rg/providers/Microsoft.Network/dnsZones/?api-version=2018-05-01\n",
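An editor's note, not part of the original report: the failing URL ends in `dnsZones/?api-version=2018-05-01`, i.e. the zone-name path segment is empty, which suggests `ansible_facts.domain` expanded to an empty string on localhost rather than a credentials problem. A minimal Python check for that symptom:

```python
# Hedged sketch: detect an empty trailing resource segment in an Azure
# management URL. The 404 above ends in ".../dnsZones/?api-version=...",
# i.e. the zone name between "dnsZones/" and "?" is missing.
from urllib.parse import urlsplit

def empty_trailing_segment(url: str) -> bool:
    """True when the last path segment before the query string is empty."""
    return urlsplit(url).path.endswith("/")

url = ("https://management.azure.com/subscriptions/e23cf4ad-2d71-446f-9864-82961ad66ae7"
       "/resourceGroups/mv-dns-rg/providers/Microsoft.Network/dnsZones/"
       "?api-version=2018-05-01")
print(empty_trailing_segment(url))  # True: the zone-name segment is empty
```

If the check fires, asserting that `zone_name` is non-empty before calling the module is a cheaper diagnosis than re-creating the service principal.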
|
1.0
|
Issue with azure_rm_dnsrecordset: 404 Client Error: Not Found for url - ##### SUMMARY
Hi!
I get the following Issue
msrestazure.azure_exceptions.CloudError: 404 Client Error: Not Found for url: https://management.azure.com/subscriptions/e23cf4ad-2d71-446f-9864-82961ad66ae7/resourceGroups/mv-dns-rg/providers/Microsoft.Network/dnsZones/?api-version=2018-05-01\n",
my task:
---
- name: Set A record in Azure
become: false
hosts: localhost
tasks:
- name: Create A record
azure_rm_dnsrecordset:
resource_group: mv-dns-rg
relative_name: "{{ inventory_hostname }}"
zone_name: "{{ ansible_facts.domain }}"
record_type: A
records:
- entry: "{{ publicipaddresse }}"
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
azure_rm_dnsrecordset
##### ANSIBLE VERSION
AWX 17.0.1
azure.azcollection:1.5.0
##### CONFIGURATION
nothing
##### OS / ENVIRONMENT
nothing
##### STEPS TO REPRODUCE
add azure credentials in AWX
run playbook/template
get error
(i tried re-creating the service principal - the same worked a week or two ago)
##### EXPECTED RESULTS
creating A record on azure DNS zone
##### ACTUAL RESULTS
receive error:
msrestazure.azure_exceptions.CloudError: 404 Client Error: Not Found for url: https://management.azure.com/subscriptions/e23cf4ad-2d71-446f-9864-82961ad66ae7/resourceGroups/mv-dns-rg/providers/Microsoft.Network/dnsZones/?api-version=2018-05-01\n",
|
non_defect
|
issue with azure rm dnsrecordset client error not found for url summary hi i get the following issue msrestazure azure exceptions clouderror client error not found for url my task name setze a record in azure become false hosts localhost tasks name a record erstellen azure rm dnsrecordset resource group mv dns rg relative name inventory hostname zone name ansible facts domain record type a records entry publicipaddresse issue type bug report component name azure rm dnsrecordset ansible version awx azure azcollection configuration nothing os environment nothing steps to reproduce add azure credentials in awx run playbook template get error i tried re creating the service principal the same worked a week or two ago expected results creating a record on azure dns zone actual results receive error msrestazure azure exceptions clouderror client error not found for url
| 0
|
47,964
| 2,990,053,714
|
IssuesEvent
|
2015-07-21 06:26:03
|
jayway/rest-assured
|
https://api.github.com/repos/jayway/rest-assured
|
closed
|
JSONPath fails to escape URL 127.0.0.1:8080
|
bug imported invalid Priority-Medium
|
_From [JanNiko...@googlemail.com](https://code.google.com/u/102041242540857970093/) on August 22, 2012 16:30:47_
Here is a unittest to reproduce the problem:
@Test
public void testDotEscapingWithUrlKey() {
JsonPath path = new JsonPath("{ \"http://127.0.0.1:8080/key\" : \"value\" }");
assertEquals("value", path.get("http://127.0.0.1:8080/key"));
}
I am using version 1.6.2
Might be related to this fixed issue: https://code.google.com/p/rest-assured/issues/detail?id=172
_Original issue: http://code.google.com/p/rest-assured/issues/detail?id=190_
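The root cause — dots inside a key colliding with the path separator — is easy to see outside REST Assured; a plain-Python illustration (not the REST Assured API) of why such keys need escaping:

```python
# Hedged sketch: a key containing dots cannot be navigated by splitting a
# path expression on ".", because the dots inside "127.0.0.1" are
# indistinguishable from path separators. Direct keyed lookup is unambiguous.
import json

doc = json.loads('{ "http://127.0.0.1:8080/key" : "value" }')

naive_parts = "http://127.0.0.1:8080/key".split(".")
print(len(naive_parts))   # 4 fragments instead of one key
print(doc["http://127.0.0.1:8080/key"])  # direct lookup still works
```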
|
1.0
|
JSONPath fails to escape URL 127.0.0.1:8080 - _From [JanNiko...@googlemail.com](https://code.google.com/u/102041242540857970093/) on August 22, 2012 16:30:47_
Here is a unittest to reproduce the problem:
@Test
public void testDotEscapingWithUrlKey() {
JsonPath path = new JsonPath("{ \"http://127.0.0.1:8080/key\" : \"value\" }");
assertEquals("value", path.get("http://127.0.0.1:8080/key"));
}
I am using version 1.6.2
Might be related to this fixed issue: https://code.google.com/p/rest-assured/issues/detail?id=172
_Original issue: http://code.google.com/p/rest-assured/issues/detail?id=190_
|
non_defect
|
jsonpath fails to escape url from on august here is a unittest to reproduce the problem test public void testdotescapingwithurlkey jsonpath path new jsonpath value assertequals value path get i am using version might be related to this fixed issue original issue
| 0
|
201,777
| 23,039,644,537
|
IssuesEvent
|
2022-07-23 01:07:02
|
turkdevops/icu
|
https://api.github.com/repos/turkdevops/icu
|
opened
|
tzinfo-1.2.9.gem: 1 vulnerabilities (highest severity is: 7.5)
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tzinfo-1.2.9.gem</b></p></summary>
<p>TZInfo provides daylight savings aware transformations between times in different time zones.</p>
<p>Library home page: <a href="https://rubygems.org/gems/tzinfo-1.2.9.gem">https://rubygems.org/gems/tzinfo-1.2.9.gem</a></p>
<p>Path to dependency file: /docs/Gemfile.lock</p>
<p>Path to vulnerable library: /ms/2.5.0/cache/tzinfo-1.2.9.gem</p>
<p>
</details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2022-31163](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31163) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | tzinfo-1.2.9.gem | Direct | tzinfo - 0.3.61,1.2.10 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-31163</summary>
### Vulnerable Library - <b>tzinfo-1.2.9.gem</b></p>
<p>TZInfo provides daylight savings aware transformations between times in different time zones.</p>
<p>Library home page: <a href="https://rubygems.org/gems/tzinfo-1.2.9.gem">https://rubygems.org/gems/tzinfo-1.2.9.gem</a></p>
<p>Path to dependency file: /docs/Gemfile.lock</p>
<p>Path to vulnerable library: /ms/2.5.0/cache/tzinfo-1.2.9.gem</p>
<p>
Dependency Hierarchy:
- :x: **tzinfo-1.2.9.gem** (Vulnerable Library)
<p>Found in base branch: <b>gh-pages</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
TZInfo is a Ruby library that provides access to time zone data and allows times to be converted using time zone rules. Versions prior to 0.36.1, as well as those prior to 1.2.10 when used with the Ruby data source tzinfo-data, are vulnerable to relative path traversal. With the Ruby data source, time zones are defined in Ruby files. There is one file per time zone. Time zone files are loaded with `require` on demand. In the affected versions, `TZInfo::Timezone.get` fails to validate time zone identifiers correctly, allowing a new line character within the identifier. With Ruby version 1.9.3 and later, `TZInfo::Timezone.get` can be made to load unintended files with `require`, executing them within the Ruby process. Versions 0.3.61 and 1.2.10 include fixes to correctly validate time zone identifiers. Versions 2.0.0 and later are not vulnerable. Version 0.3.61 can still load arbitrary files from the Ruby load path if their name follows the rules for a valid time zone identifier and the file has a prefix of `tzinfo/definition` within a directory in the load path. Applications should ensure that untrusted files are not placed in a directory on the load path. As a workaround, the time zone identifier can be validated before passing to `TZInfo::Timezone.get` by ensuring it matches the regular expression `\A[A-Za-z0-9+\-_]+(?:\/[A-Za-z0-9+\-_]+)*\z`.
<p>Publish Date: 2022-07-22
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31163>CVE-2022-31163</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tzinfo/tzinfo/security/advisories/GHSA-5cm2-9h8c-rvfx">https://github.com/tzinfo/tzinfo/security/advisories/GHSA-5cm2-9h8c-rvfx</a></p>
<p>Release Date: 2022-07-22</p>
<p>Fix Resolution: tzinfo - 0.3.61,1.2.10</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
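The advisory's workaround regex ports directly to other languages; a hedged Python sketch (Ruby's `\A...\z` anchoring expressed via `re.fullmatch`):

```python
# Port of the advisory's workaround: validate a time-zone identifier before
# handing it to any loader. re.fullmatch anchors at both ends, matching
# Ruby's \A...\z, so a smuggled newline or "../" cannot slip through.
import re

_TZ_ID = re.compile(r"[A-Za-z0-9+\-_]+(?:/[A-Za-z0-9+\-_]+)*")

def valid_tz_identifier(identifier: str) -> bool:
    return _TZ_ID.fullmatch(identifier) is not None

print(valid_tz_identifier("America/New_York"))  # True
print(valid_tz_identifier("foo\nbar"))          # False: newline rejected
```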
|
True
|
tzinfo-1.2.9.gem: 1 vulnerabilities (highest severity is: 7.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tzinfo-1.2.9.gem</b></p></summary>
<p>TZInfo provides daylight savings aware transformations between times in different time zones.</p>
<p>Library home page: <a href="https://rubygems.org/gems/tzinfo-1.2.9.gem">https://rubygems.org/gems/tzinfo-1.2.9.gem</a></p>
<p>Path to dependency file: /docs/Gemfile.lock</p>
<p>Path to vulnerable library: /ms/2.5.0/cache/tzinfo-1.2.9.gem</p>
<p>
</details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2022-31163](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31163) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | tzinfo-1.2.9.gem | Direct | tzinfo - 0.3.61,1.2.10 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-31163</summary>
### Vulnerable Library - <b>tzinfo-1.2.9.gem</b></p>
<p>TZInfo provides daylight savings aware transformations between times in different time zones.</p>
<p>Library home page: <a href="https://rubygems.org/gems/tzinfo-1.2.9.gem">https://rubygems.org/gems/tzinfo-1.2.9.gem</a></p>
<p>Path to dependency file: /docs/Gemfile.lock</p>
<p>Path to vulnerable library: /ms/2.5.0/cache/tzinfo-1.2.9.gem</p>
<p>
Dependency Hierarchy:
- :x: **tzinfo-1.2.9.gem** (Vulnerable Library)
<p>Found in base branch: <b>gh-pages</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
TZInfo is a Ruby library that provides access to time zone data and allows times to be converted using time zone rules. Versions prior to 0.36.1, as well as those prior to 1.2.10 when used with the Ruby data source tzinfo-data, are vulnerable to relative path traversal. With the Ruby data source, time zones are defined in Ruby files. There is one file per time zone. Time zone files are loaded with `require` on demand. In the affected versions, `TZInfo::Timezone.get` fails to validate time zone identifiers correctly, allowing a new line character within the identifier. With Ruby version 1.9.3 and later, `TZInfo::Timezone.get` can be made to load unintended files with `require`, executing them within the Ruby process. Versions 0.3.61 and 1.2.10 include fixes to correctly validate time zone identifiers. Versions 2.0.0 and later are not vulnerable. Version 0.3.61 can still load arbitrary files from the Ruby load path if their name follows the rules for a valid time zone identifier and the file has a prefix of `tzinfo/definition` within a directory in the load path. Applications should ensure that untrusted files are not placed in a directory on the load path. As a workaround, the time zone identifier can be validated before passing to `TZInfo::Timezone.get` by ensuring it matches the regular expression `\A[A-Za-z0-9+\-_]+(?:\/[A-Za-z0-9+\-_]+)*\z`.
<p>Publish Date: 2022-07-22
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31163>CVE-2022-31163</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tzinfo/tzinfo/security/advisories/GHSA-5cm2-9h8c-rvfx">https://github.com/tzinfo/tzinfo/security/advisories/GHSA-5cm2-9h8c-rvfx</a></p>
<p>Release Date: 2022-07-22</p>
<p>Fix Resolution: tzinfo - 0.3.61,1.2.10</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
|
non_defect
|
tzinfo gem vulnerabilities highest severity is vulnerable library tzinfo gem tzinfo provides daylight savings aware transformations between times in different time zones library home page a href path to dependency file docs gemfile lock path to vulnerable library ms cache tzinfo gem vulnerabilities cve severity cvss dependency type fixed in remediation available high tzinfo gem direct tzinfo details cve vulnerable library tzinfo gem tzinfo provides daylight savings aware transformations between times in different time zones library home page a href path to dependency file docs gemfile lock path to vulnerable library ms cache tzinfo gem dependency hierarchy x tzinfo gem vulnerable library found in base branch gh pages vulnerability details tzinfo is a ruby library that provides access to time zone data and allows times to be converted using time zone rules versions prior to as well as those prior to when used with the ruby data source tzinfo data are vulnerable to relative path traversal with the ruby data source time zones are defined in ruby files there is one file per time zone time zone files are loaded with require on demand in the affected versions tzinfo timezone get fails to validate time zone identifiers correctly allowing a new line character within the identifier with ruby version and later tzinfo timezone get can be made to load unintended files with require executing them within the ruby process versions and include fixes to correctly validate time zone identifiers versions and later are not vulnerable version can still load arbitrary files from the ruby load path if their name follows the rules for a valid time zone identifier and the file has a prefix of tzinfo definition within a directory in the load path applications should ensure that untrusted files are not placed in a directory on the load path as a workaround the time zone identifier can be validated before passing to tzinfo timezone get by ensuring it matches the regular expression a z publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tzinfo step up your open source security game with mend
| 0
|
57,479
| 15,801,474,529
|
IssuesEvent
|
2021-04-03 05:05:29
|
openzfs/zfs
|
https://api.github.com/repos/openzfs/zfs
|
closed
|
stuck in futex
|
Status: Triage Needed Type: Defect
|
I haven't heard back about how to debug the previous issue (11641), so here goes anyway.
Postgres autovacuum worker process stuck in futex. I have seen this ~~twice~~ 3 times before on another server.
The process does not die with SIGINT, and I imagine if I SIGKILL it, it won't die, and I'll need to reboot the server.
[pryzbyj@ts-db-new ~]$ ps -O lstart,wchan=wwwwwwwwwwwwwwwwwwww 12583
PID STARTED wwwwwwwwwwwwwwwwwwww S TTY TIME COMMAND
12583 Wed Mar 3 03:41:23 2021 futex_wait_queue_me S ? 00:00:59 postgres: autovacuum worker ....
Distribution Name | Centos
Distribution Version | 7.8
Linux Kernel | kernel-3.10.0-1127.18.2.el7 and 3.10.0-1160.15.2.el7
Architecture | x86_64
ZFS Version | 2.0.1-1 and 2.0.3-1
SPL Version | 2.0.1-1
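For triage, the interesting fields of that `ps` line are the wait channel and the process state; a small Python sketch (parsing the exact line from this report) pulls them out. State `S` is interruptible sleep, so the process not responding to SIGINT suggests it is stuck on the futex itself rather than in uninterruptible I/O:

```python
# Hedged sketch: extract PID, wait channel, and state from the fixed-order
# `ps -O lstart,wchan=...` output quoted above. Field positions assume this
# exact output format (PID, 5 lstart tokens, wchan, state, ...).
line = ("12583 Wed Mar 3 03:41:23 2021 futex_wait_queue_me "
        "S ? 00:00:59 postgres: autovacuum worker ....")
fields = line.split()
pid, wchan, state = fields[0], fields[6], fields[7]
print(pid, wchan, state)
```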
|
1.0
|
stuck in futex - I haven't heard back about how to debug the previous issue (11641), so here goes anyway.
Postgres autovacuum worker process stuck in futex. I have seen this ~~twice~~ 3 times before on another server.
The process does not die with SIGINT, and I imagine if I SIGKILL it, it won't die, and I'll need to reboot the server.
[pryzbyj@ts-db-new ~]$ ps -O lstart,wchan=wwwwwwwwwwwwwwwwwwww 12583
PID STARTED wwwwwwwwwwwwwwwwwwww S TTY TIME COMMAND
12583 Wed Mar 3 03:41:23 2021 futex_wait_queue_me S ? 00:00:59 postgres: autovacuum worker ....
Distribution Name | Centos
Distribution Version | 7.8
Linux Kernel | kernel-3.10.0-1127.18.2.el7 and 3.10.0-1160.15.2.el7
Architecture | x86_64
ZFS Version | 2.0.1-1 and 2.0.3-1
SPL Version | 2.0.1-1
|
defect
|
stuck in futex i haven t heard back about how to debug the previous issue so here goes anyway postgres autovacuum worker process stuck in futex i have seen this twice times before on another server the process does not die with sigint and i imagine if i sigkill it it won t die and i ll need to reboot the server ps o lstart wchan wwwwwwwwwwwwwwwwwwww pid started wwwwwwwwwwwwwwwwwwww s tty time command wed mar futex wait queue me s postgres autovacuum worker distribution name centos distribution version linux kernel kernel and architecture zfs version and spl version
| 1
|
62,696
| 17,148,273,278
|
IssuesEvent
|
2021-07-13 17:01:02
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
Outgoing message is corrupted in local cache; perpetually displayed as newest message
|
T-Defect Z-Synapse
|
### Description
This outgoing message is perpetually displayed as the newest message in this room for me, from my desktop Element client:
"View source" gives:
```
{
"type": "m.room.message",
"content": {
"msgtype": "m.text",
"body": "but on the other side it's not actually much entropy once you know this person thinks \"nerdy books\" are good encryption keys"
},
"event_id": "$vzIp4ZPdxzFzmpNIOsAXV7reeAhn0cTDLGahXamggec",
"user_id": "@sporksmith:matrix.org",
"sender": "@sporksmith:matrix.org",
"room_id": "!HRkvwgoHhxxegkVaQY:matrix.org",
"origin_server_ts": 1621294330488
}
```
This seems to be missing some fields? Maybe notably "age"? For comparison, after a bit I thought maybe the message hadn't gone out, and re-sent the same text. "View source" for that one gives:
```
{
"content": {
"body": "but on the other side it's not actually much entropy once you know this person thinks \"nerdy books\" are good encryption keys",
"msgtype": "m.text"
},
"origin_server_ts": 1621297232938,
"room_id": "!HRkvwgoHhxxegkVaQY:matrix.org",
"sender": "@sporksmith:matrix.org",
"type": "m.room.message",
"unsigned": {
"age": 747163236
},
"event_id": "$9Vz-zkzpdXbIBcO5quuhKgsjPMpRVV_3NweNVNlHX1A",
"user_id": "@sporksmith:matrix.org",
"age": 747163236
}
```
The original message appears normally from my other clients (e.g. Element on mobile), and according to others did go out the first time and looked normal, so I think it's somehow corrupted in this client's local cache.
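The missing-fields comparison described above can be made precise; a sketch (payload copied from this report) that diffs the cached event's top-level keys against the healthy re-send:

```python
# Hedged sketch: compare the corrupted cached event against the key set of
# the healthy re-sent event to list exactly which fields are missing.
import json

corrupted = json.loads("""{
  "type": "m.room.message",
  "content": {"msgtype": "m.text", "body": "..."},
  "event_id": "$vzIp4ZPdxzFzmpNIOsAXV7reeAhn0cTDLGahXamggec",
  "user_id": "@sporksmith:matrix.org",
  "sender": "@sporksmith:matrix.org",
  "room_id": "!HRkvwgoHhxxegkVaQY:matrix.org",
  "origin_server_ts": 1621294330488
}""")

# Top-level keys of the healthy re-sent event shown below.
healthy_keys = {"content", "origin_server_ts", "room_id", "sender", "type",
                "unsigned", "event_id", "user_id", "age"}
missing = sorted(healthy_keys - corrupted.keys())
print(missing)  # the cached copy lacks "age" and "unsigned"
```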
### Steps to reproduce
Haven't been able to reproduce
- Send a message (in an IRC bridged room)
- ??
- Observe outgoing message perpetually displayed as newest in the room
Describe how what happens differs from what you expected:
Logs being sent: yes
### Version information
<!-- IMPORTANT: please answer the following questions, to help us narrow down the problem -->
- **Platform**: desktop
- **OS**: Ubuntu
- **Version**: 1.7.27
|
1.0
|
Outgoing message is corrupted in local cache; perpetually displayed as newest message - ### Description
This outgoing message is perpetually displayed as the newest message in this room for me, from my desktop Element client:
"View source" gives:
```
{
"type": "m.room.message",
"content": {
"msgtype": "m.text",
"body": "but on the other side it's not actually much entropy once you know this person thinks \"nerdy books\" are good encryption keys"
},
"event_id": "$vzIp4ZPdxzFzmpNIOsAXV7reeAhn0cTDLGahXamggec",
"user_id": "@sporksmith:matrix.org",
"sender": "@sporksmith:matrix.org",
"room_id": "!HRkvwgoHhxxegkVaQY:matrix.org",
"origin_server_ts": 1621294330488
}
```
This seems to be missing some fields? Maybe notably "age"? For comparison, after a bit I thought maybe the message hadn't gone out, and re-sent the same text. "View source" for that one gives:
```
{
"content": {
"body": "but on the other side it's not actually much entropy once you know this person thinks \"nerdy books\" are good encryption keys",
"msgtype": "m.text"
},
"origin_server_ts": 1621297232938,
"room_id": "!HRkvwgoHhxxegkVaQY:matrix.org",
"sender": "@sporksmith:matrix.org",
"type": "m.room.message",
"unsigned": {
"age": 747163236
},
"event_id": "$9Vz-zkzpdXbIBcO5quuhKgsjPMpRVV_3NweNVNlHX1A",
"user_id": "@sporksmith:matrix.org",
"age": 747163236
}
```
The original message appears normally from my other clients (e.g. Element on mobile), and according to others did go out the first time and looked normal, so I think it's somehow corrupted in this client's local cache.
### Steps to reproduce
Haven't been able to reproduce
- Send a message (in an IRC bridged room)
- ??
- Observe outgoing message perpetually displayed as newest in the room
Describe how what happens differs from what you expected:
Logs being sent: yes
### Version information
<!-- IMPORTANT: please answer the following questions, to help us narrow down the problem -->
- **Platform**: desktop
- **OS**: Ubuntu
- **Version**: 1.7.27
|
defect
|
outgoing message is corrupted in local cache perpetually displayed as newest message description this outgoing message is perpetually displayed as the newest message in this room for me from my desktop element client view source gives type m room message content msgtype m text body but on the other side it s not actually much entropy once you know this person thinks nerdy books are good encryption keys event id user id sporksmith matrix org sender sporksmith matrix org room id hrkvwgohhxxegkvaqy matrix org origin server ts this seems to be missing some fields maybe notably age for comparison after a bit i thought maybe the message hadn t gone out and re sent the same text view source for that one gives content body but on the other side it s not actually much entropy once you know this person thinks nerdy books are good encryption keys msgtype m text origin server ts room id hrkvwgohhxxegkvaqy matrix org sender sporksmith matrix org type m room message unsigned age event id user id sporksmith matrix org age the original message appears normally from my other clients e g element on mobile and according to others did go out the first time and looked normal so i think it s somehow corrupted in this client s local cache steps to reproduce haven t been able to reproduce send a message in an irc bridged room observe outgoing message perpetually displayed as newest in the room describe how what happens differs from what you expected logs being sent yes version information platform desktop os ubuntu version
| 1
|
56,154
| 14,950,455,329
|
IssuesEvent
|
2021-01-26 13:06:59
|
PowerDNS/pdns
|
https://api.github.com/repos/PowerDNS/pdns
|
closed
|
dnsdist: setServFailWhenNoServer mirrors EDNS options and query flags
|
defect dnsdist
|
- Program: dnsdist
- Issue type: Bug report
### Short description
Like #6847 / #6348 but for `setServFailWhenNoServer(true)`
### Environment
- Operating system:
- Software version: I checked 1.5.1 and master
### Steps to reproduce
1. start dnsdist with no (up) backends
2. `setServFailWhenNoServer(true)`
3. issue a query with EDNS options (`+cookie +nsid`)
### Expected behaviour
SERVFAIL response with EDNS but without cookie, empty NSID, and my AD flag.
### Actual behaviour
```
; <<>> DiG 9.11.5-P4-5.1+deb10u2-Debian <<>> a example.com @127.0.0.1 -p 5300 +nsid
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 44564
;; flags: qr rd ad; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; NSID
; COOKIE: 935507bf57bc90ae (echoed)
;; QUESTION SECTION:
;example.com. IN A
;; Query time: 0 msec
;; SERVER: 127.0.0.1#5300(127.0.0.1)
;; WHEN: Mon Jan 25 14:10:49 CET 2021
;; MSG SIZE rcvd: 56
```
### Other information
Perhaps the bufsize should also not be mirrored from the query.
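A minimal sketch (plain Python, illustrative field names, not dnsdist's internals) of the behaviour the report asks for when synthesising the SERVFAIL locally — keep the ID and question, but reset state the server never actually processed:

```python
# Hedged sketch of a locally synthesised SERVFAIL: the ID and question must
# echo the query, but the AD flag, cookie, and NSID should not be mirrored,
# since no backend ever validated or produced them. Field names are made up.
def synthesize_servfail(query: dict) -> dict:
    return {
        "id": query["id"],                       # must match for the client
        "question": query["question"],
        "rcode": "SERVFAIL",
        "flags": {"qr": True,
                  "rd": query["flags"].get("rd", False),
                  "ad": False},                  # AD is not ours to assert
        "edns": {"version": 0, "cookie": None, "nsid": None},
    }

q = {"id": 44564, "question": "example.com. IN A",
     "flags": {"rd": True, "ad": True},
     "edns": {"cookie": "935507bf57bc90ae", "nsid": b""}}
r = synthesize_servfail(q)
print(r["flags"]["ad"], r["edns"]["cookie"])  # False None
```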
|
1.0
|
dnsdist: setServFailWhenNoServer mirrors EDNS options and query flags - - Program: dnsdist
- Issue type: Bug report
### Short description
Like #6847 / #6348 but for `setServFailWhenNoServer(true)`
### Environment
- Operating system:
- Software version: I checked 1.5.1 and master
### Steps to reproduce
1. start dnsdist with no (up) backends
2. `setServFailWhenNoServer(true)`
3. issue a query with EDNS options (`+cookie +nsid`)
### Expected behaviour
SERVFAIL response with EDNS but without cookie, empty NSID, and my AD flag.
### Actual behaviour
```
; <<>> DiG 9.11.5-P4-5.1+deb10u2-Debian <<>> a example.com @127.0.0.1 -p 5300 +nsid
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 44564
;; flags: qr rd ad; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; NSID
; COOKIE: 935507bf57bc90ae (echoed)
;; QUESTION SECTION:
;example.com. IN A
;; Query time: 0 msec
;; SERVER: 127.0.0.1#5300(127.0.0.1)
;; WHEN: Mon Jan 25 14:10:49 CET 2021
;; MSG SIZE rcvd: 56
```
### Other information
Perhaps the bufsize should also not be mirrored from the query.
|
defect
|
dnsdist setservfailwhennoserver mirrors edns options and query flags program dnsdist issue type bug report short description like but for setservfailwhennoserver true environment operating system software version i checked and master steps to reproduce start dnsdist with no up backends setservfailwhennoserver true issue a query with edns options cookie nsid expected behaviour servfail response with edns but without cookie empty nsid and my ad flag actual behaviour dig debian a example com p nsid global options cmd got answer header opcode query status servfail id flags qr rd ad query answer authority additional warning recursion requested but not available opt pseudosection edns version flags udp nsid cookie echoed question section example com in a query time msec server when mon jan cet msg size rcvd other information perhaps the bufsize should also not be mirrored from the query
| 1
|
45,147
| 12,601,793,182
|
IssuesEvent
|
2020-06-11 10:29:29
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
closed
|
Cannot call SQL Server stored procedure on HSQLDB
|
C: DB: HSQLDB C: Functionality E: Enterprise Edition E: Professional Edition P: Medium R: Fixed T: Defect
|
### Expected behavior and actual behavior:
Expected:
Call HSQLDB stored procedure and get the result.
Actual:
```
org.jooq.exception.DataAccessException: SQL [select "dbo"."get_unique_ids"(cast(? as varchar(32672)), cast(? as int), cast(? as bigint)) from (select 1 as dual from information_schema.system_users limit 1) as dual]; user lacks privilege or object not found: ts_allocate_unique_long_id_batch_sp in statement [select "dbo"."get_unique_ids"(cast(? as varchar(32672)), cast(? as int), cast(? as bigint)) from (select 1 as dual from information_schema.system_users limit 1) as dual]
at org.jooq_3.11.2.HSQLDB.debug(Unknown Source)
at org.jooq.impl.Tools.translate(Tools.java:2380)
at org.jooq.impl.DefaultExecuteContext.sqlException(DefaultExecuteContext.java:811)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:364)
at org.jooq.impl.AbstractResultQuery.fetchLazy(AbstractResultQuery.java:393)
at org.jooq.impl.AbstractResultQuery.fetchLazy(AbstractResultQuery.java:380)
at org.jooq.impl.AbstractResultQuery.fetchOne(AbstractResultQuery.java:545)
at org.jooq.impl.AbstractResultQuery.fetchOne(AbstractResultQuery.java:481)
at org.jooq.impl.SelectImpl.fetchOne(SelectImpl.java:2819)
at org.jooq.impl.AbstractRoutine.executeSelect(AbstractRoutine.java:455)
at org.jooq.impl.AbstractRoutine.execute(AbstractRoutine.java:385)
at org.jooq.impl.AbstractRoutine.execute(AbstractRoutine.java:321)
at com.twosigma.ops.fobo.trade.data.generated.Routines.getUniqueIds(Routines.java:49)
at com.twosigma.ops.fobo.trade.data.MyTest.canAllocateUniqueIds(MyTest.java:80)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
Caused by: java.sql.SQLSyntaxErrorException: user lacks privilege or object not found: get_unique_ids in statement [select "dbo"."get_unique_ids"(cast(? as varchar(32672)), cast(? as int), cast(? as bigint)) from (select 1 as dual from information_schema.system_users limit 1) as dual]
at org.hsqldb.jdbc.JDBCUtil.sqlException(Unknown Source)
at org.hsqldb.jdbc.JDBCUtil.sqlException(Unknown Source)
at org.hsqldb.jdbc.JDBCPreparedStatement.<init>(Unknown Source)
at org.hsqldb.jdbc.JDBCConnection.prepareStatement(Unknown Source)
at com.twosigma.dbpool.ConnectionProxy.prepareStatement(ConnectionProxy.java:206)
at org.jooq.impl.ProviderEnabledConnection.prepareStatement(ProviderEnabledConnection.java:109)
at org.jooq.impl.SettingsEnabledConnection.prepareStatement(SettingsEnabledConnection.java:73)
at org.jooq.impl.AbstractResultQuery.prepare(AbstractResultQuery.java:239)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:322)
... 33 more
Caused by: org.hsqldb.HsqlException: user lacks privilege or object not found: get_unique_ids
at org.hsqldb.error.Error.error(Unknown Source)
at org.hsqldb.error.Error.error(Unknown Source)
at org.hsqldb.ParserDQL.readColumnOrFunctionExpression(Unknown Source)
at org.hsqldb.ParserDQL.XreadSimpleValueExpressionPrimary(Unknown Source)
at org.hsqldb.ParserDQL.XreadAllTypesValueExpressionPrimary(Unknown Source)
at org.hsqldb.ParserDQL.XreadAllTypesPrimary(Unknown Source)
at org.hsqldb.ParserDQL.XreadAllTypesFactor(Unknown Source)
at org.hsqldb.ParserDQL.XreadAllTypesTerm(Unknown Source)
at org.hsqldb.ParserDQL.XreadAllTypesCommonValueExpression(Unknown Source)
at org.hsqldb.ParserDQL.XreadValueExpression(Unknown Source)
at org.hsqldb.ParserDQL.XreadSelect(Unknown Source)
at org.hsqldb.ParserDQL.XreadQuerySpecification(Unknown Source)
at org.hsqldb.ParserDQL.XreadSimpleTable(Unknown Source)
at org.hsqldb.ParserDQL.XreadQueryPrimary(Unknown Source)
at org.hsqldb.ParserDQL.XreadQueryTerm(Unknown Source)
at org.hsqldb.ParserDQL.XreadQueryExpressionBody(Unknown Source)
at org.hsqldb.ParserDQL.XreadQueryExpression(Unknown Source)
at org.hsqldb.ParserDQL.compileCursorSpecification(Unknown Source)
at org.hsqldb.ParserCommand.compilePart(Unknown Source)
at org.hsqldb.ParserCommand.compileStatement(Unknown Source)
at org.hsqldb.Session.compileStatement(Unknown Source)
at org.hsqldb.StatementManager.compile(Unknown Source)
at org.hsqldb.Session.execute(Unknown Source)
... 40 more
```
### Steps to reproduce the problem (include example code if possible):
Inserted a stored procedure using context.execute():
```
DROP PROCEDURE "dbo"."get_unique_ids" IF EXISTS;
CREATE PROCEDURE "dbo"."get_unique_ids"(
IN p_name VARCHAR(128),
IN p_count INT,
OUT ret_id BIGINT
)
MODIFIES SQL DATA
BEGIN ATOMIC
SET ret_id = ...;
END
```
The code for the jOOQ routine was generated from an SQL Server database:
```
IF NOT EXISTS (SELECT * FROM sysobjects AS so WHERE so.name = 'unique_ids')
BEGIN
CREATE TABLE [dbo].[unique_ids] (...) ON [PRIMARY]
END
IF NOT EXISTS (SELECT * FROM sysobjects AS so WHERE so.name = 'get_unique_ids')
BEGIN
EXEC('CREATE PROCEDURE dbo.get_unique_ids (
@name VARCHAR(128),
@count INT,
@id BIGINT OUTPUT)
AS
BEGIN
SET @id = ...
END')
END
```
Call the procedure using the jOOQ routine:
```
DSLContext context = // build my context
Long out = null;
Routines.getUniqueIds(context.configuration(), "test", 4, out);
```
I am not sure what all the "dual" table queries are about, so not sure how to debug this. Please let me know! I'm able to call the stored procedure using the JDBC connection directly.
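The "dual" wrapping comes from how jOOQ renders scalar routine calls: it selects the call from a one-row table, and on dialects without a real DUAL table (HSQLDB among them) it emulates one — that emulated subquery is exactly what appears in the stack trace. A minimal Python sketch of that shape (`render_function_call` is a hypothetical helper for illustration, not jOOQ's actual renderer; the emulation string is copied from the stack trace above):

```python
# jOOQ renders a scalar routine call as a SELECT. Dialects without a real
# one-row DUAL table (HSQLDB among them) get an emulated one, which is the
# subquery visible in the stack trace above.
def render_function_call(call_expr: str) -> str:
    # Hypothetical helper showing the shape of the generated SQL;
    # this is not jOOQ's real rendering code.
    dual = "(select 1 as dual from information_schema.system_users limit 1) as dual"
    return f"select {call_expr} from {dual}"

sql = render_function_call('"dbo"."get_unique_ids"(?, ?, ?)')
print(sql)
```

Note also `AbstractRoutine.executeSelect` in the stack trace: jOOQ is executing the routine via the function path (a SELECT) rather than a procedure `CALL`, which fails here because HSQLDB only knows `get_unique_ids` as a procedure.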
### Versions:
jOOQ: 3.11.2
Java: 1.8.0_172
OS: Linux
JDBC Driver (include name if inofficial driver): HSQLDB
|
1.0
|
Cannot call SQL Server stored procedure on HSQLDB - ### Expected behavior and actual behavior:
Expected:
Call HSQLDB stored procedure and get the result.
Actual:
```
org.jooq.exception.DataAccessException: SQL [select "dbo"."get_unique_ids"(cast(? as varchar(32672)), cast(? as int), cast(? as bigint)) from (select 1 as dual from information_schema.system_users limit 1) as dual]; user lacks privilege or object not found: ts_allocate_unique_long_id_batch_sp in statement [select "dbo"."get_unique_ids"(cast(? as varchar(32672)), cast(? as int), cast(? as bigint)) from (select 1 as dual from information_schema.system_users limit 1) as dual]
at org.jooq_3.11.2.HSQLDB.debug(Unknown Source)
at org.jooq.impl.Tools.translate(Tools.java:2380)
at org.jooq.impl.DefaultExecuteContext.sqlException(DefaultExecuteContext.java:811)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:364)
at org.jooq.impl.AbstractResultQuery.fetchLazy(AbstractResultQuery.java:393)
at org.jooq.impl.AbstractResultQuery.fetchLazy(AbstractResultQuery.java:380)
at org.jooq.impl.AbstractResultQuery.fetchOne(AbstractResultQuery.java:545)
at org.jooq.impl.AbstractResultQuery.fetchOne(AbstractResultQuery.java:481)
at org.jooq.impl.SelectImpl.fetchOne(SelectImpl.java:2819)
at org.jooq.impl.AbstractRoutine.executeSelect(AbstractRoutine.java:455)
at org.jooq.impl.AbstractRoutine.execute(AbstractRoutine.java:385)
at org.jooq.impl.AbstractRoutine.execute(AbstractRoutine.java:321)
at com.twosigma.ops.fobo.trade.data.generated.Routines.getUniqueIds(Routines.java:49)
at com.twosigma.ops.fobo.trade.data.MyTest.canAllocateUniqueIds(MyTest.java:80)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
Caused by: java.sql.SQLSyntaxErrorException: user lacks privilege or object not found: get_unique_ids in statement [select "dbo"."get_unique_ids"(cast(? as varchar(32672)), cast(? as int), cast(? as bigint)) from (select 1 as dual from information_schema.system_users limit 1) as dual]
at org.hsqldb.jdbc.JDBCUtil.sqlException(Unknown Source)
at org.hsqldb.jdbc.JDBCUtil.sqlException(Unknown Source)
at org.hsqldb.jdbc.JDBCPreparedStatement.<init>(Unknown Source)
at org.hsqldb.jdbc.JDBCConnection.prepareStatement(Unknown Source)
at com.twosigma.dbpool.ConnectionProxy.prepareStatement(ConnectionProxy.java:206)
at org.jooq.impl.ProviderEnabledConnection.prepareStatement(ProviderEnabledConnection.java:109)
at org.jooq.impl.SettingsEnabledConnection.prepareStatement(SettingsEnabledConnection.java:73)
at org.jooq.impl.AbstractResultQuery.prepare(AbstractResultQuery.java:239)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:322)
... 33 more
Caused by: org.hsqldb.HsqlException: user lacks privilege or object not found: get_unique_ids
at org.hsqldb.error.Error.error(Unknown Source)
at org.hsqldb.error.Error.error(Unknown Source)
at org.hsqldb.ParserDQL.readColumnOrFunctionExpression(Unknown Source)
at org.hsqldb.ParserDQL.XreadSimpleValueExpressionPrimary(Unknown Source)
at org.hsqldb.ParserDQL.XreadAllTypesValueExpressionPrimary(Unknown Source)
at org.hsqldb.ParserDQL.XreadAllTypesPrimary(Unknown Source)
at org.hsqldb.ParserDQL.XreadAllTypesFactor(Unknown Source)
at org.hsqldb.ParserDQL.XreadAllTypesTerm(Unknown Source)
at org.hsqldb.ParserDQL.XreadAllTypesCommonValueExpression(Unknown Source)
at org.hsqldb.ParserDQL.XreadValueExpression(Unknown Source)
at org.hsqldb.ParserDQL.XreadSelect(Unknown Source)
at org.hsqldb.ParserDQL.XreadQuerySpecification(Unknown Source)
at org.hsqldb.ParserDQL.XreadSimpleTable(Unknown Source)
at org.hsqldb.ParserDQL.XreadQueryPrimary(Unknown Source)
at org.hsqldb.ParserDQL.XreadQueryTerm(Unknown Source)
at org.hsqldb.ParserDQL.XreadQueryExpressionBody(Unknown Source)
at org.hsqldb.ParserDQL.XreadQueryExpression(Unknown Source)
at org.hsqldb.ParserDQL.compileCursorSpecification(Unknown Source)
at org.hsqldb.ParserCommand.compilePart(Unknown Source)
at org.hsqldb.ParserCommand.compileStatement(Unknown Source)
at org.hsqldb.Session.compileStatement(Unknown Source)
at org.hsqldb.StatementManager.compile(Unknown Source)
at org.hsqldb.Session.execute(Unknown Source)
... 40 more
```
### Steps to reproduce the problem (include example code if possible):
Inserted a stored procedure using context.execute():
```
DROP PROCEDURE "dbo"."get_unique_ids" IF EXISTS;
CREATE PROCEDURE "dbo"."get_unique_ids"(
IN p_name VARCHAR(128),
IN p_count INT,
OUT ret_id BIGINT
)
MODIFIES SQL DATA
BEGIN ATOMIC
SET ret_id = ...;
END
```
The code for the jOOQ routine was generated from an SQL Server database:
```
IF NOT EXISTS (SELECT * FROM sysobjects AS so WHERE so.name = 'unique_ids')
BEGIN
CREATE TABLE [dbo].[unique_ids] (...) ON [PRIMARY]
END
IF NOT EXISTS (SELECT * FROM sysobjects AS so WHERE so.name = 'get_unique_ids')
BEGIN
EXEC('CREATE PROCEDURE dbo.get_unique_ids (
@name VARCHAR(128),
@count INT,
@id BIGINT OUTPUT)
AS
BEGIN
SET @id = ...
END')
END
```
Call the procedure using the jOOQ routine:
```
DSLContext context = // build my context
Long out = null;
Routines.getUniqueIds(context.configuration(), "test", 4, out);
```
I am not sure what all the "dual" table queries are about, so not sure how to debug this. Please let me know! I'm able to call the stored procedure using the JDBC connection directly.
### Versions:
jOOQ: 3.11.2
Java: 1.8.0_172
OS: Linux
JDBC Driver (include name if inofficial driver): HSQLDB
|
defect
|
cannot call sql server stored procedure on hsqldb expected behavior and actual behavior expected call hsqldb stored procedure and get the result actual org jooq exception dataaccessexception sql user lacks privilege or object not found ts allocate unique long id batch sp in statement at org jooq hsqldb debug unknown source at org jooq impl tools translate tools java at org jooq impl defaultexecutecontext sqlexception defaultexecutecontext java at org jooq impl abstractquery execute abstractquery java at org jooq impl abstractresultquery fetchlazy abstractresultquery java at org jooq impl abstractresultquery fetchlazy abstractresultquery java at org jooq impl abstractresultquery fetchone abstractresultquery java at org jooq impl abstractresultquery fetchone abstractresultquery java at org jooq impl selectimpl fetchone selectimpl java at org jooq impl abstractroutine executeselect abstractroutine java at org jooq impl abstractroutine execute abstractroutine java at org jooq impl abstractroutine execute abstractroutine java at com twosigma ops fobo trade data generated routines getuniqueids routines java at com twosigma ops fobo trade data mytest canallocateuniqueids mytest java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at org junit internal runners statements runbefores evaluate runbefores java at org junit runners parentrunner runleaf parentrunner java at org junit runners runchild java at org junit runners runchild java at org 
junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org junit runner junitcore run junitcore java at com intellij startrunnerwithargs java at com intellij rt execution junit ideatestrunner repeater startrunnerwithargs ideatestrunner java at com intellij rt execution junit junitstarter preparestreamsandstart junitstarter java at com intellij rt execution junit junitstarter main junitstarter java caused by java sql sqlsyntaxerrorexception user lacks privilege or object not found get unique ids in statement at org hsqldb jdbc jdbcutil sqlexception unknown source at org hsqldb jdbc jdbcutil sqlexception unknown source at org hsqldb jdbc jdbcpreparedstatement unknown source at org hsqldb jdbc jdbcconnection preparestatement unknown source at com twosigma dbpool connectionproxy preparestatement connectionproxy java at org jooq impl providerenabledconnection preparestatement providerenabledconnection java at org jooq impl settingsenabledconnection preparestatement settingsenabledconnection java at org jooq impl abstractresultquery prepare abstractresultquery java at org jooq impl abstractquery execute abstractquery java more caused by org hsqldb hsqlexception user lacks privilege or object not found get unique ids at org hsqldb error error error unknown source at org hsqldb error error error unknown source at org hsqldb parserdql readcolumnorfunctionexpression unknown source at org hsqldb parserdql xreadsimplevalueexpressionprimary unknown source at org hsqldb parserdql xreadalltypesvalueexpressionprimary unknown source at org hsqldb parserdql xreadalltypesprimary unknown source at org hsqldb parserdql xreadalltypesfactor unknown source at org hsqldb parserdql 
xreadalltypesterm unknown source at org hsqldb parserdql xreadalltypescommonvalueexpression unknown source at org hsqldb parserdql xreadvalueexpression unknown source at org hsqldb parserdql xreadselect unknown source at org hsqldb parserdql xreadqueryspecification unknown source at org hsqldb parserdql xreadsimpletable unknown source at org hsqldb parserdql xreadqueryprimary unknown source at org hsqldb parserdql xreadqueryterm unknown source at org hsqldb parserdql xreadqueryexpressionbody unknown source at org hsqldb parserdql xreadqueryexpression unknown source at org hsqldb parserdql compilecursorspecification unknown source at org hsqldb parsercommand compilepart unknown source at org hsqldb parsercommand compilestatement unknown source at org hsqldb session compilestatement unknown source at org hsqldb statementmanager compile unknown source at org hsqldb session execute unknown source more steps to reproduce the problem include example code if possible inserted a stored procedure using context execute drop procedure dbo get unique ids if exists create procedure dbo get unique ids in p name varchar in p count int out ret id bigint modifies sql data begin atomic set ret id end the code for the jooq routine was generated from an sql server database if not exists select from sysobjects as so where so name unique ids begin create table on end if not exists select from sysobjects as so where so name get unique ids begin exec create procedure dbo get unique ids name varchar count int id bigint output as begin set id end end call the procedure using the jooq routine dslcontext context build my context long out null routines getuniqueids context configuration test out i am not sure what all the dual table queries are about so not sure how to debug this please let me know i m able to call the stored procedure using the jdbc connection directly versions jooq java os linux jdbc driver include name if inofficial driver hsqldb
| 1
|
107,793
| 11,571,393,083
|
IssuesEvent
|
2020-02-20 21:28:10
|
dewittpe/ensr
|
https://api.github.com/repos/dewittpe/ensr
|
opened
|
ensr-dataset vignette updates
|
documentation enhancement
|
- [ ] remove all backticks
- [ ] build dynamic lines, no hard coding values
|
1.0
|
ensr-dataset vignette updates - - [ ] remove all backticks
- [ ] build dynamic lines, no hard coding values
|
non_defect
|
ensr dataset vignette updates remove all backticks build dynamic lines no hard coding values
| 0
|
33,751
| 9,204,420,160
|
IssuesEvent
|
2019-03-08 07:18:31
|
qissue-bot/QGIS
|
https://api.github.com/repos/qissue-bot/QGIS
|
closed
|
version 0.8 crashes before opening on mac intel.
|
Category: Build/Install Component: Affected QGIS version Component: Crashes QGIS or corrupts data Component: Easy fix? Component: Operating System Component: Pull Request or Patch supplied Component: Regression? Component: Resolution Priority: Low Project: QGIS Application Status: Closed Tracker: Bug report
|
---
Author Name: **anonymous -** (anonymous -)
Original Redmine Issue: 504, https://issues.qgis.org/issues/504
Original Assignee: nobody -
---
i am using an intel mac with mac ox 10.4.8 with grass 6.1. i have been using the preview version of .8 with full grass support and everything was working right. however when i download the full version of .8 it could not even start.
thanks
|
1.0
|
version 0.8 crashes before opening on mac intel. - ---
Author Name: **anonymous -** (anonymous -)
Original Redmine Issue: 504, https://issues.qgis.org/issues/504
Original Assignee: nobody -
---
i am using an intel mac with mac ox 10.4.8 with grass 6.1. i have been using the preview version of .8 with full grass support and everything was working right. however when i download the full version of .8 it could not even start.
thanks
|
non_defect
|
version crashes before opening on mac intel author name anonymous anonymous original redmine issue original assignee nobody i am using an intel mac with mac ox with grass i have been using the preview version of with full grass support and everything was working right however when i download the full version of it could not even start thanks
| 0
|
57,035
| 15,606,079,280
|
IssuesEvent
|
2021-03-19 07:26:24
|
primefaces/primereact
|
https://api.github.com/repos/primefaces/primereact
|
closed
|
Password component doesn't visualize properly the value of the variable serving as it's value when it is programmatically changed
|
defect
|
**I'm submitting a**
[X] bug report
**Codesandbox Case (Bug Reports)**
I am using the sandbox provided in issue #1854, but I have edited it a little to showcase the exact behavior.
https://codesandbox.io/s/pasword-test-primereact-forked-mpe9e?file=/src/demo/PasswordDemo.jsqjx332qq4
**Current behavior**
If you have a **Password** component and its value is connected to a variable, when you change the value of the variable programmatically (For example if you have a form with username and password and you want to delete the entered values when the user submits, meaning username="" , password=""), in the **Password** component visually the old value stays, represented by stars - *** - even after you have set the password value to an empty string. In the demo provided in the sandbox, the value of the **Password** component is set to "test123" in the beginning and then it is changed through the InputBox below and also clicking the button sets the value to an empty array.
**Expected behavior**
When the value of the **Password** component is **changed** programmatically, it should be shown in the component respectively.
**Please tell us about your environment:**
OS - macOS BigSur 11.2.3
IDE - Visual Studio Code
Package manager - yarn
* **React version:**
17.0.1
* **PrimeReact version:**
6.2.1
* **Browser:** all
|
1.0
|
Password component doesn't visualize properly the value of the variable serving as it's value when it is programmatically changed - **I'm submitting a**
[X] bug report
**Codesandbox Case (Bug Reports)**
I am using the sandbox provided in issue #1854, but I have edited it a little to showcase the exact behavior.
https://codesandbox.io/s/pasword-test-primereact-forked-mpe9e?file=/src/demo/PasswordDemo.jsqjx332qq4
**Current behavior**
If you have a **Password** component and its value is connected to a variable, when you change the value of the variable programmatically (For example if you have a form with username and password and you want to delete the entered values when the user submits, meaning username="" , password=""), in the **Password** component visually the old value stays, represented by stars - *** - even after you have set the password value to an empty string. In the demo provided in the sandbox, the value of the **Password** component is set to "test123" in the beginning and then it is changed through the InputBox below and also clicking the button sets the value to an empty array.
**Expected behavior**
When the value of the **Password** component is **changed** programmatically, it should be shown in the component respectively.
**Please tell us about your environment:**
OS - macOS BigSur 11.2.3
IDE - Visual Studio Code
Package manager - yarn
* **React version:**
17.0.1
* **PrimeReact version:**
6.2.1
* **Browser:** all
|
defect
|
password component doesn t visualize properly the value of the variable serving as it s value when it is programmatically changed i m submitting a bug report codesandbox case bug reports i am using the sandbox provided in issue but i have edited it a little to showcase the exact behavior current behavior if you have a password component and it s value is connected to a variable when you change the value of the variable programmatically for example if you have a form with username and password and you want to delete the entered values when the user submits meaning username password in the password component visually the old value stays represented by start even after you have set the password value to an empty string in the demo provided in the sandbox the value of the password component is set to in the beginning and then it is changed through the inputbox below and also clicking the button sets the value to an empty array expected behavior when the value of the password component is changed programmatically it should be shown in the component respectively please tell us about your environment os macos bigsur ide visual studio code package manager yarn react version primereact version browser all
| 1
|
36,793
| 8,136,453,636
|
IssuesEvent
|
2018-08-20 08:28:37
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
closed
|
DataTable: toggleSelect selecting too many rows
|
6.2.9 defect
|
Creating this issue as requested in ticket https://github.com/primefaces/primefaces/issues/3864. All important information are in this comment in referenced ticket: https://github.com/primefaces/primefaces/issues/3864#issuecomment-411698394
## 1) Environment
- PrimeFaces version: 6.2.8
- Does it work on the newest released PrimeFaces version? Version? No
- Does it work on the newest sources in GitHub? (Build by source -> https://github.com/primefaces/primefaces/wiki/Building-From-Source) No
- Application server + version: any
- Affected browsers: any
## 2) Expected behavior
The same as in PF 6.2.6. Click on ToggleSelect should select all rows on current page in datatable. It should ignore rows for which disableSelection attribute evaluates to TRUE.
## 3) Actual behavior
All rows in datatable are selected across all pages including "disabled" ones using disableSelection.
## 4) Steps to reproduce
Will provide if needed.
## 5) Sample XHTML
Will provide if needed.
## 6) Sample bean
Will provide if needed.
|
1.0
|
DataTable: toggleSelect selecting too many rows - Creating this issue as requested in ticket https://github.com/primefaces/primefaces/issues/3864. All important information are in this comment in referenced ticket: https://github.com/primefaces/primefaces/issues/3864#issuecomment-411698394
## 1) Environment
- PrimeFaces version: 6.2.8
- Does it work on the newest released PrimeFaces version? Version? No
- Does it work on the newest sources in GitHub? (Build by source -> https://github.com/primefaces/primefaces/wiki/Building-From-Source) No
- Application server + version: any
- Affected browsers: any
## 2) Expected behavior
The same as in PF 6.2.6. Click on ToggleSelect should select all rows on current page in datatable. It should ignore rows for which disableSelection attribute evaluates to TRUE.
## 3) Actual behavior
All rows in datatable are selected across all pages including "disabled" ones using disableSelection.
## 4) Steps to reproduce
Will provide if needed.
## 5) Sample XHTML
Will provide if needed.
## 6) Sample bean
Will provide if needed.
|
defect
|
datatable toggleselect selecting too many rows creating this issue as requested in ticket all important information are in this comment in referenced ticket environment primefaces version does it work on the newest released primefaces version version no does it work on the newest sources in github build by source no application server version any affected browsers any expected behavior the same as in pf click on toggleselect should select all rows on current page in datatable it should ignore rows for which disableselection attribute evaluates to true actual behavior all rows in datatable are selected across all pages including disabled ones using disableselection steps to reproduce will provide if needed sample xhtml will provide if needed sample bean will provide if needed
| 1
|
126,773
| 12,299,393,013
|
IssuesEvent
|
2020-05-11 12:18:20
|
digital-asset/daml
|
https://api.github.com/repos/digital-asset/daml
|
closed
|
Update contract key docs
|
component/documentation
|
The contract key documentation https://docs.daml.com/daml/reference/contract-keys.html#contract-keys-functions currently has several false statements about authorisation and future plans. This needs to be tidied up.
|
1.0
|
Update contract key docs - The contract key documentation https://docs.daml.com/daml/reference/contract-keys.html#contract-keys-functions currently has several false statements about authorisation and future plans. This needs to be tidied up.
|
non_defect
|
update contract key docs the conteact key documentation currently has several false statements about authorisation an future plans this needs to be tidied up
| 0
|
62,207
| 17,023,872,659
|
IssuesEvent
|
2021-07-03 04:17:50
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
long permalink does not add notes layer
|
Component: admin Priority: major Resolution: fixed Type: defect
|
**[Submitted to the original trac issue database at 6.52am, Tuesday, 6th August 2013]**
enable notes layer, get long permalink/url - it does not include "N" for layers, thus notes layer is not visible when this permalink is used. adding N manually shows notes layer
|
1.0
|
long permalink does not add notes layer - **[Submitted to the original trac issue database at 6.52am, Tuesday, 6th August 2013]**
enable notes layer, get long permalink/url - it does not include "N" for layers, thus notes layer is not visible when this permalink is used. adding N manually shows notes layer
|
defect
|
long permalink does not add notes layer enable notes layer get long permalink url it does not include n for layers thus notes layer is not visible when this permalink is used adding n manually shows notes layer
| 1
|
13,851
| 2,789,086,701
|
IssuesEvent
|
2015-05-08 17:19:01
|
bridgedotnet/Bridge
|
https://api.github.com/repos/bridgedotnet/Bridge
|
closed
|
Window.RequestAnimationFrame return type should be 'long' not 'int'
|
defect in progress ready
|
According to documentation at [MDN - Window.requestAnimationFrame](https://developer.mozilla.org/en-US/docs/Web/API/window/requestAnimationFrame), and intellisense documentation itself, return type for `Bridge.Html5.Window.RequestAnimationFrame` should be `long`. It is, instead, `int` in the code.
|
1.0
|
Window.RequestAnimationFrame return type should be 'long' not 'int' - According to documentation at [MDN - Window.requestAnimationFrame](https://developer.mozilla.org/en-US/docs/Web/API/window/requestAnimationFrame), and intellisense documentation itself, return type for `Bridge.Html5.Window.RequestAnimationFrame` should be `long`. It is, instead, `int` in the code.
|
defect
|
window requestanimationframe return type should be long not int according to documentation at and intellisense documentation itself return type for bridge window requestanimationframe should be long it is instead int in the code
| 1
|
40,659
| 10,101,309,558
|
IssuesEvent
|
2019-07-29 08:26:25
|
spacchetti/spago
|
https://api.github.com/repos/spacchetti/spago
|
closed
|
Local dependency paths are incorrect when `packages.dhall` is in another directory
|
blocked defect in progress
|
I'm rolling out Spago in a monorepo where we have multiple packages, each with their own `spago.dhall`, and a single `packages.dhall` so that all the dependencies stay in sync.
The folder structure looks like:
```
- packages.dhall
- package-a
- spago.dhall
- package-b
- spago.dhall
```
The `packages.dhall` contains something like:
```
let additions = {
package-a =
mkPackage
./package-a/spago.dhall
"./package-a"
"v1.0.0"
, package-b =
mkPackage
./package-b/spago.dhall
"./package-b"
"v1.0.0"
}
```
And then `package-a` lists `package-b` as a dependency in its `spago.dhall`.
`package-b`'s path is listed as `"./package-b"` and this is read by Spago and passed straight to the compiler (with `src/**/*.purs` appended). However, because the `packages.dhall` has been moved up one level this doesn't resolve: the actual path to `b` from `a` should be `../package-b`.
I put together an SSCCE here that you can run: https://github.com/elliotdavies/spago-dependencies-example
My proposed solution would be to modify Spago so that it checks the location of `packages.dhall` and generates the correct relative path. I'm happy to do that work if you think it's a good idea!
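A minimal Python sketch of that proposed fix, assuming POSIX-style paths and the directory layout above (the variable names are illustrative, not Spago internals): anchor the dependency path at the directory containing `packages.dhall`, then re-express it relative to the consuming package.

```python
import posixpath

# Hypothetical layout from the report: packages.dhall at the repo root,
# each package one level below it with its own spago.dhall.
packages_dhall_dir = "."            # directory containing packages.dhall
dependency_path    = "package-b"    # "./package-b" as written in packages.dhall
consumer_dir       = "package-a"    # directory containing package-a's spago.dhall

# Anchor the dependency at packages.dhall's directory, then re-express it
# relative to the consuming package -- this is the path the compiler needs.
anchored = posixpath.normpath(posixpath.join(packages_dhall_dir, dependency_path))
corrected = posixpath.relpath(anchored, start=consumer_dir)
print(corrected)  # ../package-b
```

With the current behavior, the compiler instead receives `./package-b` unchanged, which does not resolve from inside `package-a`.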
|
1.0
|
Local dependency paths are incorrect when `packages.dhall` is in another directory - I'm rolling out Spago in a monorepo where we have multiple packages, each with their own `spago.dhall`, and a single `packages.dhall` so that all the dependencies stay in sync.
The folder structure looks like:
```
- packages.dhall
- package-a
- spago.dhall
- package-b
- spago.dhall
```
The `packages.dhall` contains something like:
```
let additions = {
package-a =
mkPackage
./package-a/spago.dhall
"./package-a"
"v1.0.0"
, package-b =
mkPackage
./package-b/spago.dhall
"./package-b"
"v1.0.0"
}
```
And then `package-a` lists `package-b` as a dependency in its `spago.dhall`.
`package-b`'s path is listed as `"./package-b"` and this is read by Spago and passed straight to the compiler (with `src/**/*.purs` appended). However, because the `packages.dhall` has been moved up one level this doesn't resolve: the actual path to `b` from `a` should be `../package-b`.
I put together an SSCCE here that you can run: https://github.com/elliotdavies/spago-dependencies-example
My proposed solution would be to modify Spago so that it checks the location of `packages.dhall` and generates the correct relative path. I'm happy to do that work if you think it's a good idea!
|
defect
|
local dependency paths are incorrect when packages dhall is in another directory i m rolling out spago in a monorepo where we have multiple packages each with their own spago dhall and a single packages dhall so that all the dependencies stay in sync the folder structure looks like packages dhall package a spago dhall package b spago dhall the packages dhall contains something like let additions package a mkpackage package a spago dhall package a package b mkpackage package b spago dhall package b and then package a lists package b as a dependency in its spago dhall package b s path is listed as package b and this is read by spago and passed straight to the compiler with src purs appended however because the packages dhall has been moved up one level this doesn t resolve the actual path to b from a should be package b i put together an sscce here that you can run my proposed solution would be to modify spago so that it checks the location of packages dhall and generates the correct relative path i m happy to do that work if you think it s a good idea
| 1
|
115,405
| 17,313,864,901
|
IssuesEvent
|
2021-07-27 01:22:41
|
dreamboy9/fuchsia
|
https://api.github.com/repos/dreamboy9/fuchsia
|
opened
|
CVE-2021-32715 (Medium) detected in hyper-0.13.6.crate, hyper-0.13.2.crate
|
security vulnerability
|
## CVE-2021-32715 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>hyper-0.13.6.crate</b>, <b>hyper-0.13.2.crate</b></p></summary>
<p>
<details><summary><b>hyper-0.13.6.crate</b></p></summary>
<p>A fast and correct HTTP library.</p>
<p>Library home page: <a href="https://crates.io/api/v1/crates/hyper/0.13.6/download">https://crates.io/api/v1/crates/hyper/0.13.6/download</a></p>
<p>
Dependency Hierarchy:
- :x: **hyper-0.13.6.crate** (Vulnerable Library)
</details>
<details><summary><b>hyper-0.13.2.crate</b></p></summary>
<p>A fast and correct HTTP library.</p>
<p>Library home page: <a href="https://crates.io/api/v1/crates/hyper/0.13.2/download">https://crates.io/api/v1/crates/hyper/0.13.2/download</a></p>
<p>
Dependency Hierarchy:
- :x: **hyper-0.13.2.crate** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
hyper is an HTTP library for rust. hyper's HTTP/1 server code had a flaw that incorrectly parses and accepts requests with a `Content-Length` header with a prefixed plus sign, when it should have been rejected as illegal. This combined with an upstream HTTP proxy that doesn't parse such `Content-Length` headers, but forwards them, can result in "request smuggling" or "desync attacks". The flaw exists in all prior versions of hyper prior to 0.14.10, if built with `rustc` v1.5.0 or newer. The vulnerability is patched in hyper version 0.14.10. Two workarounds exist: One may reject requests manually that contain a plus sign prefix in the `Content-Length` header or ensure any upstream proxy handles `Content-Length` headers with a plus sign prefix.
<p>Publish Date: 2021-07-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32715>CVE-2021-32715</a></p>
</p>
</details>
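The illegal form in question is a `Content-Length` value such as `+42`; per RFC 7230 the value must be digits only (`1*DIGIT`). A sketch of the strict check an application or upstream proxy could apply as a workaround (illustrative, not hyper's internal code):

```python
def is_valid_content_length(value: str) -> bool:
    """RFC 7230: Content-Length = 1*DIGIT. Reject a '+' (or any other)
    prefix that a lenient integer parser might fold into the number."""
    # str.isdigit alone accepts non-ASCII digit characters,
    # so pair it with isascii for a strict ASCII-digits-only check.
    return len(value) > 0 and value.isascii() and value.isdigit()
```

Under lenient parsing `"+42"` reads as 42, which is what enables the request-smuggling desync; the check above rejects it outright.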
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-32715">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-32715</a></p>
<p>Release Date: 2021-07-07</p>
<p>Fix Resolution: hyper - 0.14.10</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-32715 (Medium) detected in hyper-0.13.6.crate, hyper-0.13.2.crate - ## CVE-2021-32715 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>hyper-0.13.6.crate</b>, <b>hyper-0.13.2.crate</b></p></summary>
<p>
<details><summary><b>hyper-0.13.6.crate</b></p></summary>
<p>A fast and correct HTTP library.</p>
<p>Library home page: <a href="https://crates.io/api/v1/crates/hyper/0.13.6/download">https://crates.io/api/v1/crates/hyper/0.13.6/download</a></p>
<p>
Dependency Hierarchy:
- :x: **hyper-0.13.6.crate** (Vulnerable Library)
</details>
<details><summary><b>hyper-0.13.2.crate</b></p></summary>
<p>A fast and correct HTTP library.</p>
<p>Library home page: <a href="https://crates.io/api/v1/crates/hyper/0.13.2/download">https://crates.io/api/v1/crates/hyper/0.13.2/download</a></p>
<p>
Dependency Hierarchy:
- :x: **hyper-0.13.2.crate** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
hyper is an HTTP library for rust. hyper's HTTP/1 server code had a flaw that incorrectly parses and accepts requests with a `Content-Length` header with a prefixed plus sign, when it should have been rejected as illegal. This combined with an upstream HTTP proxy that doesn't parse such `Content-Length` headers, but forwards them, can result in "request smuggling" or "desync attacks". The flaw exists in all prior versions of hyper prior to 0.14.10, if built with `rustc` v1.5.0 or newer. The vulnerability is patched in hyper version 0.14.10. Two workarounds exist: One may reject requests manually that contain a plus sign prefix in the `Content-Length` header or ensure any upstream proxy handles `Content-Length` headers with a plus sign prefix.
<p>Publish Date: 2021-07-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32715>CVE-2021-32715</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-32715">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-32715</a></p>
<p>Release Date: 2021-07-07</p>
<p>Fix Resolution: hyper - 0.14.10</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve medium detected in hyper crate hyper crate cve medium severity vulnerability vulnerable libraries hyper crate hyper crate hyper crate a fast and correct http library library home page a href dependency hierarchy x hyper crate vulnerable library hyper crate a fast and correct http library library home page a href dependency hierarchy x hyper crate vulnerable library found in base branch master vulnerability details hyper is an http library for rust hyper s http server code had a flaw that incorrectly parses and accepts requests with a content length header with a prefixed plus sign when it should have been rejected as illegal this combined with an upstream http proxy that doesn t parse such content length headers but forwards them can result in request smuggling or desync attacks the flaw exists in all prior versions of hyper prior to if built with rustc or newer the vulnerability is patched in hyper version two workarounds exist one may reject requests manually that contain a plus sign prefix in the content length header or ensure any upstream proxy handles content length headers with a plus sign prefix publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution hyper step up your open source security game with whitesource
| 0
|
56,422
| 15,083,817,856
|
IssuesEvent
|
2021-02-05 16:19:39
|
openzfs/zfs
|
https://api.github.com/repos/openzfs/zfs
|
opened
|
Renaming symbolic link with project quota raises EXDEV [Errno 18] Invalid cross-device link
|
Status: Triage Needed Type: Defect
|
### System information
--- | ---
Distribution Name | Debian
Distribution Version | Buster
Linux Kernel | 5.4.34-1-pve
Architecture | x86_64
ZFS Version | 0.8.5-pve1
SPL Version | 0.8.3-pve1
### Describe the problem you're observing
When project quota is enabled, calling os.rename on a symbolic link raises EXDEV (18) [Errno 18] Invalid cross-device link
### Describe how to reproduce the problem
```
ln -s /usr/bin/bash test
python -c "import os; os.rename('test', 'foo')"
WORKING !
chattr +P .
chattr -p 3 .
ln -s /usr/bin/bash test
python -c "import os; os.rename('test', 'foo')"
Traceback (most recent call last):
File "<string>", line 1, in <module>
OSError: [Errno 18] Invalid cross-device link
```
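Until the kernel-side behaviour is fixed, userspace can treat the spurious EXDEV like a genuine cross-device rename and fall back to copy-and-delete; a generic sketch of that workaround (a common pattern, not a ZFS fix):

```python
import errno
import os
import shutil

def rename_with_exdev_fallback(src: str, dst: str) -> None:
    """os.rename, falling back to shutil.move when the kernel reports
    EXDEV (as it does here across project-quota boundaries)."""
    try:
        os.rename(src, dst)
    except OSError as exc:
        if exc.errno != errno.EXDEV:
            raise
        # shutil.move recreates symlinks rather than following them,
        # then removes the source entry.
        shutil.move(src, dst)
```

Replacing `os.rename('test', 'foo')` in the repro with this helper makes the rename succeed whether or not the directory has a project ID set.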
|
1.0
|
Renaming symbolic link with project quota raises EXDEV [Errno 18] Invalid cross-device link -
### System information
--- | ---
Distribution Name | Debian
Distribution Version | Buster
Linux Kernel | 5.4.34-1-pve
Architecture | x86_64
ZFS Version | 0.8.5-pve1
SPL Version | 0.8.3-pve1
### Describe the problem you're observing
When project quota is enabled, calling os.rename on a symbolic link raises EXDEV (18) [Errno 18] Invalid cross-device link
### Describe how to reproduce the problem
```
ln -s /usr/bin/bash test
python -c "import os; os.rename('test', 'foo')"
WORKING !
chattr +P .
chattr -p 3 .
ln -s /usr/bin/bash test
python -c "import os; os.rename('test', 'foo')"
Traceback (most recent call last):
File "<string>", line 1, in <module>
OSError: [Errno 18] Invalid cross-device link
```
|
defect
|
renaming symbolic link with project quota raise exdev invalid cross device link system information distribution name debian distribution version buster linux kernel pve architecture zfs version spl version describe the problem you re observing when project quota is enabled calling os rename on a symbolic link raise exdev invalid cross device link describe how to reproduce the problem ln s usr bin bash test python c import os os rename test foo working chattr p chattr p ln s usr bin bash test python c import os os rename test foo traceback most recent call last file line in oserror invalid cross device link
| 1
|
58,676
| 16,680,259,690
|
IssuesEvent
|
2021-06-07 22:14:36
|
dkfans/keeperfx
|
https://api.github.com/repos/dkfans/keeperfx
|
closed
|
REMOVE_SACRIFICE_RECIPE only partially functional
|
Priority-Medium Type-Defect
|
The script command works for most sacrifices, but these two don't work:
`REMOVE_SACRIFICE_RECIPE(IMP)`
If you use this, the voice indicates it's not a sacrifice recipe, but the imps still get cheaper.
`REMOVE_SACRIFICE_RECIPE(TROLL, BILE_DEMON, DARK_MISTRESS)`
If you use this in a level script, it does nothing but post a warning message in the log:
> Warning: set_sacrifice_recipe_process: Unable to find sacrifice rule to remove
|
1.0
|
REMOVE_SACRIFICE_RECIPE only partially functional - The script command works for most sacrifices, but these two don't work:
`REMOVE_SACRIFICE_RECIPE(IMP)`
If you use this, the voice indicates it's not a sacrifice recipe, but the imps still get cheaper.
`REMOVE_SACRIFICE_RECIPE(TROLL, BILE_DEMON, DARK_MISTRESS)`
If you use this in a level script, it does nothing but post a warning message in the log:
> Warning: set_sacrifice_recipe_process: Unable to find sacrifice rule to remove
|
defect
|
remove sacrifice recipe only partially functional the script command works for most sacrifices but these two don t work remove sacrifice recipe imp if you use this the voice indicates it s not a sacrifice recipe but the imps still get cheaper remove sacrifice recipe troll bile demon dark mistress if you use this in a level script it does nothing but post a warning message in the log warning set sacrifice recipe process unable to find sacrifice rule to remove
| 1
|
37,677
| 8,474,794,793
|
IssuesEvent
|
2018-10-24 17:06:59
|
brainvisa/testbidon
|
https://api.github.com/repos/brainvisa/testbidon
|
closed
|
Windows 7: somadicom plugin build fail
|
Category: soma-io Component: Resolution Priority: Urgent Status: Closed Tracker: Defect
|
---
Author Name: **Souedet, Nicolas** (Souedet, Nicolas)
Original Redmine Issue: 12893, https://bioproj.extra.cea.fr/redmine/issues/12893
Original Date: 2015-07-03
Original Assignee: Souedet, Nicolas
---
```
[ 40%] Building CXX object build_files/soma-io/src/somadicom/CMakeFiles/somadicom.dir/checker/dicomformatchecker.cc.obj
cd /C/bv/build-trunk-windows-xp-i686-release/build_files/soma-io/src/somadicom && /C/msys/1.0/MinGW/bin/g++.exe -Dsomadicom_EXPORTS -DUSE_SHARE_CONFIG -D_REENTRANT -DSOMA_IO_DICOM -DHAVE_CONFIG_H -IC:/msys/1.0/local/regex-2.7/include -O3 -DNDEBUG -I/C/bv/build-trunk-windows-xp-i686-release/include -I/c/msys/1.0/local/libsigc++-2.1.1/include/sigc++-2.0 -I/c/msys/1.0/local/libsigc++-2.1.1/lib/sigc++-2.0/include -I/C/msys/1.0/local/libxml2-2.7.8/include -I/C/msys/1.0/local/boost-1.51.0/include/boost-1_51 -I/C/msys/1.0/local/dcmtk-3.5.4/include -I/C/msys/1.0/local/dcmtk-3.5.4/include/dcmtk -I/C/msys/1.0/local/dcmtk-3.5.4/include/dcmtk/dcmtk/config -I/C/msys/1.0/local/dcmtk-3.5.4/include/dcmtk/dcmdata -I/C/msys/1.0/local/dcmtk-3.5.4/include/dcmtk/dcmnet -I/C/msys/1.0/local/dcmtk-3.5.4/include/dcmtk/dcmtls -I/C/msys/1.0/local/dcmtk-3.5.4/include/dcmtk/dcmimgle -I/C/msys/1.0/local/dcmtk-3.5.4/include/dcmtk/dcmjpeg -I/C/msys/1.0/local/dcmtk-3.5.4/include/dcmtk/ofstd -o CMakeFiles/somadicom.dir/checker/dicomformatchecker.cc.obj -c /C/bv/soma/soma-io/trunk/src/somadicom/checker/dicomformatchecker.cc
In file included from c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/Dicom/DatasetModule.h:8:0,
from c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/Dicom/DicomIO.h:8,
from c:/bv/soma/soma-io/trunk/src/somadicom/checker/dicomformatchecker.cc:36:
c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/Utils/StdInt.h:11:17: error: conflicting declaration 'typedef char int8_t'
In file included from c:\msys\1.0\mingw\bin\../lib/gcc/i686-w64-mingw32/4.7.3/include/stdint.h:3:0,
from c:/bv/build-trunk-windows-xp-i686-release/include/cartobase/config/cartobase_config.h:125,
from c:/bv/build-trunk-windows-xp-i686-release/include/cartobase/object/object.h:37,
from c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/checker/formatchecker.h:38,
from c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/checker/dicomformatchecker.h:38,
from c:/bv/soma/soma-io/trunk/src/somadicom/checker/dicomformatchecker.cc:35:
c:\msys\1.0\mingw\bin\../lib/gcc/i686-w64-mingw32/4.7.3/../../../../i686-w64-mingw32/include/stdint.h:35:21: error: 'int8_t' has a previous declaration as 'typedef signed char int8_t'
In file included from c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/Dicom/DatasetModule.h:8:0,
from c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/Dicom/DicomIO.h:8,
from c:/bv/soma/soma-io/trunk/src/somadicom/checker/dicomformatchecker.cc:36:
c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/Utils/StdInt.h:22:26: error: conflicting declaration 'typedef unsigned int ssize_t'
In file included from c:\msys\1.0\mingw\bin\../lib/gcc/i686-w64-mingw32/4.7.3/../../../../i686-w64-mingw32/include/crtdefs.h:10:0,
from c:\msys\1.0\mingw\bin\../lib/gcc/i686-w64-mingw32/4.7.3/../../../../i686-w64-mingw32/include/stdint.h:28,
from c:\msys\1.0\mingw\bin\../lib/gcc/i686-w64-mingw32/4.7.3/include/stdint.h:3,
from c:/bv/build-trunk-windows-xp-i686-release/include/cartobase/config/cartobase_config.h:125,
from c:/bv/build-trunk-windows-xp-i686-release/include/cartobase/object/object.h:37,
from c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/checker/formatchecker.h:38,
from c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/checker/dicomformatchecker.h:38,
from c:/bv/soma/soma-io/trunk/src/somadicom/checker/dicomformatchecker.cc:35:
c:\msys\1.0\mingw\bin\../lib/gcc/i686-w64-mingw32/4.7.3/../../../../i686-w64-mingw32/include/_mingw.h:389:13: error: 'ssize_t' has a previous declaration as 'typedef int ssize_t'
In file included from c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/Dicom/DatasetModule.h:8:0,
from c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/Dicom/DicomIO.h:8,
from c:/bv/soma/soma-io/trunk/src/somadicom/checker/dicomformatchecker.cc:36:
c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/Utils/StdInt.h:25:14: error: conflicting declaration 'typedef long int off_t'
In file included from c:\msys\1.0\mingw\bin\../lib/gcc/i686-w64-mingw32/4.7.3/../../../../i686-w64-mingw32/include/wchar.h:379:0,
from c:\msys\1.0\mingw\bin\../lib/gcc/i686-w64-mingw32/4.7.3/include/c++/cwchar:46,
from c:\msys\1.0\mingw\bin\../lib/gcc/i686-w64-mingw32/4.7.3/include/c++/bits/postypes.h:42,
from c:\msys\1.0\mingw\bin\../lib/gcc/i686-w64-mingw32/4.7.3/include/c++/iosfwd:42,
from c:\msys\1.0\mingw\bin\../lib/gcc/i686-w64-mingw32/4.7.3/include/c++/ios:39,
from c:\msys\1.0\mingw\bin\../lib/gcc/i686-w64-mingw32/4.7.3/include/c++/ostream:40,
from c:\msys\1.0\mingw\bin\../lib/gcc/i686-w64-mingw32/4.7.3/include/c++/iostream:40,
from c:/bv/build-trunk-windows-xp-i686-release/include/cartobase/smart/rcptr.h:40,
from c:/bv/build-trunk-windows-xp-i686-release/include/cartobase/type/types.h:38,
from c:/bv/build-trunk-windows-xp-i686-release/include/cartobase/object/object.h:38,
from c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/checker/formatchecker.h:38,
from c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/checker/dicomformatchecker.h:38,
from c:/bv/soma/soma-io/trunk/src/somadicom/checker/dicomformatchecker.cc:35:
c:\msys\1.0\mingw\bin\../lib/gcc/i686-w64-mingw32/4.7.3/../../../../i686-w64-mingw32/include/_mingw_off_t.h:24:17: error: 'off_t' has a previous declaration as 'typedef off64_t off_t'
make[2]: *** [build_files/soma-io/src/somadicom/CMakeFiles/somadicom.dir/checker/dicomformatchecker.cc.obj] Error 1
make[2]: Leaving directory `/c/bv/build-trunk-windows-xp-i686-release'
make[1]: *** [build_files/soma-io/src/somadicom/CMakeFiles/somadicom.dir/all] Error 2
make[1]: Leaving directory `/c/bv/build-trunk-windows-xp-i686-release'
make: *** [all] Error 2
```
|
1.0
|
Windows 7: somadicom plugin build fail - ---
Author Name: **Souedet, Nicolas** (Souedet, Nicolas)
Original Redmine Issue: 12893, https://bioproj.extra.cea.fr/redmine/issues/12893
Original Date: 2015-07-03
Original Assignee: Souedet, Nicolas
---
```
[ 40%] Building CXX object build_files/soma-io/src/somadicom/CMakeFiles/somadicom.dir/checker/dicomformatchecker.cc.obj
cd /C/bv/build-trunk-windows-xp-i686-release/build_files/soma-io/src/somadicom && /C/msys/1.0/MinGW/bin/g++.exe -Dsomadicom_EXPORTS -DUSE_SHARE_CONFIG -D_REENTRANT -DSOMA_IO_DICOM -DHAVE_CONFIG_H -IC:/msys/1.0/local/regex-2.7/include -O3 -DNDEBUG -I/C/bv/build-trunk-windows-xp-i686-release/include -I/c/msys/1.0/local/libsigc++-2.1.1/include/sigc++-2.0 -I/c/msys/1.0/local/libsigc++-2.1.1/lib/sigc++-2.0/include -I/C/msys/1.0/local/libxml2-2.7.8/include -I/C/msys/1.0/local/boost-1.51.0/include/boost-1_51 -I/C/msys/1.0/local/dcmtk-3.5.4/include -I/C/msys/1.0/local/dcmtk-3.5.4/include/dcmtk -I/C/msys/1.0/local/dcmtk-3.5.4/include/dcmtk/dcmtk/config -I/C/msys/1.0/local/dcmtk-3.5.4/include/dcmtk/dcmdata -I/C/msys/1.0/local/dcmtk-3.5.4/include/dcmtk/dcmnet -I/C/msys/1.0/local/dcmtk-3.5.4/include/dcmtk/dcmtls -I/C/msys/1.0/local/dcmtk-3.5.4/include/dcmtk/dcmimgle -I/C/msys/1.0/local/dcmtk-3.5.4/include/dcmtk/dcmjpeg -I/C/msys/1.0/local/dcmtk-3.5.4/include/dcmtk/ofstd -o CMakeFiles/somadicom.dir/checker/dicomformatchecker.cc.obj -c /C/bv/soma/soma-io/trunk/src/somadicom/checker/dicomformatchecker.cc
In file included from c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/Dicom/DatasetModule.h:8:0,
from c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/Dicom/DicomIO.h:8,
from c:/bv/soma/soma-io/trunk/src/somadicom/checker/dicomformatchecker.cc:36:
c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/Utils/StdInt.h:11:17: error: conflicting declaration 'typedef char int8_t'
In file included from c:\msys\1.0\mingw\bin\../lib/gcc/i686-w64-mingw32/4.7.3/include/stdint.h:3:0,
from c:/bv/build-trunk-windows-xp-i686-release/include/cartobase/config/cartobase_config.h:125,
from c:/bv/build-trunk-windows-xp-i686-release/include/cartobase/object/object.h:37,
from c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/checker/formatchecker.h:38,
from c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/checker/dicomformatchecker.h:38,
from c:/bv/soma/soma-io/trunk/src/somadicom/checker/dicomformatchecker.cc:35:
c:\msys\1.0\mingw\bin\../lib/gcc/i686-w64-mingw32/4.7.3/../../../../i686-w64-mingw32/include/stdint.h:35:21: error: 'int8_t' has a previous declaration as 'typedef signed char int8_t'
In file included from c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/Dicom/DatasetModule.h:8:0,
from c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/Dicom/DicomIO.h:8,
from c:/bv/soma/soma-io/trunk/src/somadicom/checker/dicomformatchecker.cc:36:
c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/Utils/StdInt.h:22:26: error: conflicting declaration 'typedef unsigned int ssize_t'
In file included from c:\msys\1.0\mingw\bin\../lib/gcc/i686-w64-mingw32/4.7.3/../../../../i686-w64-mingw32/include/crtdefs.h:10:0,
from c:\msys\1.0\mingw\bin\../lib/gcc/i686-w64-mingw32/4.7.3/../../../../i686-w64-mingw32/include/stdint.h:28,
from c:\msys\1.0\mingw\bin\../lib/gcc/i686-w64-mingw32/4.7.3/include/stdint.h:3,
from c:/bv/build-trunk-windows-xp-i686-release/include/cartobase/config/cartobase_config.h:125,
from c:/bv/build-trunk-windows-xp-i686-release/include/cartobase/object/object.h:37,
from c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/checker/formatchecker.h:38,
from c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/checker/dicomformatchecker.h:38,
from c:/bv/soma/soma-io/trunk/src/somadicom/checker/dicomformatchecker.cc:35:
c:\msys\1.0\mingw\bin\../lib/gcc/i686-w64-mingw32/4.7.3/../../../../i686-w64-mingw32/include/_mingw.h:389:13: error: 'ssize_t' has a previous declaration as 'typedef int ssize_t'
In file included from c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/Dicom/DatasetModule.h:8:0,
from c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/Dicom/DicomIO.h:8,
from c:/bv/soma/soma-io/trunk/src/somadicom/checker/dicomformatchecker.cc:36:
c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/Utils/StdInt.h:25:14: error: conflicting declaration 'typedef long int off_t'
In file included from c:\msys\1.0\mingw\bin\../lib/gcc/i686-w64-mingw32/4.7.3/../../../../i686-w64-mingw32/include/wchar.h:379:0,
from c:\msys\1.0\mingw\bin\../lib/gcc/i686-w64-mingw32/4.7.3/include/c++/cwchar:46,
from c:\msys\1.0\mingw\bin\../lib/gcc/i686-w64-mingw32/4.7.3/include/c++/bits/postypes.h:42,
from c:\msys\1.0\mingw\bin\../lib/gcc/i686-w64-mingw32/4.7.3/include/c++/iosfwd:42,
from c:\msys\1.0\mingw\bin\../lib/gcc/i686-w64-mingw32/4.7.3/include/c++/ios:39,
from c:\msys\1.0\mingw\bin\../lib/gcc/i686-w64-mingw32/4.7.3/include/c++/ostream:40,
from c:\msys\1.0\mingw\bin\../lib/gcc/i686-w64-mingw32/4.7.3/include/c++/iostream:40,
from c:/bv/build-trunk-windows-xp-i686-release/include/cartobase/smart/rcptr.h:40,
from c:/bv/build-trunk-windows-xp-i686-release/include/cartobase/type/types.h:38,
from c:/bv/build-trunk-windows-xp-i686-release/include/cartobase/object/object.h:38,
from c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/checker/formatchecker.h:38,
from c:/bv/build-trunk-windows-xp-i686-release/include/soma-io/checker/dicomformatchecker.h:38,
from c:/bv/soma/soma-io/trunk/src/somadicom/checker/dicomformatchecker.cc:35:
c:\msys\1.0\mingw\bin\../lib/gcc/i686-w64-mingw32/4.7.3/../../../../i686-w64-mingw32/include/_mingw_off_t.h:24:17: error: 'off_t' has a previous declaration as 'typedef off64_t off_t'
make[2]: *** [build_files/soma-io/src/somadicom/CMakeFiles/somadicom.dir/checker/dicomformatchecker.cc.obj] Error 1
make[2]: Leaving directory `/c/bv/build-trunk-windows-xp-i686-release'
make[1]: *** [build_files/soma-io/src/somadicom/CMakeFiles/somadicom.dir/all] Error 2
make[1]: Leaving directory `/c/bv/build-trunk-windows-xp-i686-release'
make: *** [all] Error 2
```
|
defect
|
somadicom plugin build fail author name souedet nicolas souedet nicolas original redmine issue original date original assignee souedet nicolas building cxx object build files soma io src somadicom cmakefiles somadicom dir checker dicomformatchecker cc obj cd c bv build trunk windows xp release build files soma io src somadicom c msys mingw bin g exe dsomadicom exports duse share config d reentrant dsoma io dicom dhave config h ic msys local regex include dndebug i c bv build trunk windows xp release include i c msys local libsigc include sigc i c msys local libsigc lib sigc include i c msys local include i c msys local boost include boost i c msys local dcmtk include i c msys local dcmtk include dcmtk i c msys local dcmtk include dcmtk dcmtk config i c msys local dcmtk include dcmtk dcmdata i c msys local dcmtk include dcmtk dcmnet i c msys local dcmtk include dcmtk dcmtls i c msys local dcmtk include dcmtk dcmimgle i c msys local dcmtk include dcmtk dcmjpeg i c msys local dcmtk include dcmtk ofstd o cmakefiles somadicom dir checker dicomformatchecker cc obj c c bv soma soma io trunk src somadicom checker dicomformatchecker cc in file included from c bv build trunk windows xp release include soma io dicom datasetmodule h from c bv build trunk windows xp release include soma io dicom dicomio h from c bv soma soma io trunk src somadicom checker dicomformatchecker cc c bv build trunk windows xp release include soma io utils stdint h error conflicting declaration typedef char t in file included from c msys mingw bin lib gcc include stdint h from c bv build trunk windows xp release include cartobase config cartobase config h from c bv build trunk windows xp release include cartobase object object h from c bv build trunk windows xp release include soma io checker formatchecker h from c bv build trunk windows xp release include soma io checker dicomformatchecker h from c bv soma soma io trunk src somadicom checker dicomformatchecker cc c msys mingw bin lib gcc include 
stdint h error t has a previous declaration as typedef signed char t in file included from c bv build trunk windows xp release include soma io dicom datasetmodule h from c bv build trunk windows xp release include soma io dicom dicomio h from c bv soma soma io trunk src somadicom checker dicomformatchecker cc c bv build trunk windows xp release include soma io utils stdint h error conflicting declaration typedef unsigned int ssize t in file included from c msys mingw bin lib gcc include crtdefs h from c msys mingw bin lib gcc include stdint h from c msys mingw bin lib gcc include stdint h from c bv build trunk windows xp release include cartobase config cartobase config h from c bv build trunk windows xp release include cartobase object object h from c bv build trunk windows xp release include soma io checker formatchecker h from c bv build trunk windows xp release include soma io checker dicomformatchecker h from c bv soma soma io trunk src somadicom checker dicomformatchecker cc c msys mingw bin lib gcc include mingw h error ssize t has a previous declaration as typedef int ssize t in file included from c bv build trunk windows xp release include soma io dicom datasetmodule h from c bv build trunk windows xp release include soma io dicom dicomio h from c bv soma soma io trunk src somadicom checker dicomformatchecker cc c bv build trunk windows xp release include soma io utils stdint h error conflicting declaration typedef long int off t in file included from c msys mingw bin lib gcc include wchar h from c msys mingw bin lib gcc include c cwchar from c msys mingw bin lib gcc include c bits postypes h from c msys mingw bin lib gcc include c iosfwd from c msys mingw bin lib gcc include c ios from c msys mingw bin lib gcc include c ostream from c msys mingw bin lib gcc include c iostream from c bv build trunk windows xp release include cartobase smart rcptr h from c bv build trunk windows xp release include cartobase type types h from c bv build trunk windows xp 
release include cartobase object object h from c bv build trunk windows xp release include soma io checker formatchecker h from c bv build trunk windows xp release include soma io checker dicomformatchecker h from c bv soma soma io trunk src somadicom checker dicomformatchecker cc c msys mingw bin lib gcc include mingw off t h error off t has a previous declaration as typedef t off t make error make leaving directory c bv build trunk windows xp release make error make leaving directory c bv build trunk windows xp release make error
| 1
|
659,568
| 21,933,569,560
|
IssuesEvent
|
2022-05-23 11:58:24
|
Tangerine-Community/Tangerine
|
https://api.github.com/repos/Tangerine-Community/Tangerine
|
closed
|
Self Evaluation Input Type (Tablet)
|
Education Project Priority feature
|
The child selects from the splash screen which test to take (forms listed as custom icons and text)
Upon opening the next page they hear an audio recording - the Initial Sound/On-Open sound plays automatically.
There is a Replay button (an icon of a person speaking). The child can press this replay button no more than X times. If X is reached the button can be greyed out or disappear.
This replay button plays the Instructions sound
There is a set of letters/numbers/words/space/special characters floating at the top of the screen.
There is a space for “writing” in the middle of the screen. No individual boxes are displayed; empty space gets filled in during the task
When the child taps a letter from the top letter list, this letter is copied to the middle list but remains on top
TBC - The child can locate a next arrow
The child sees a checkmark beside the empty space for their letters
If the response is left empty and they click Next, the child hears the Blank sound prompt (“Are you sure you want to skip this one?”). This acts a little bit as a warn-if functionality where pressing Next again will let them go
The child sees an eraser under each item, or under the last item they have added, which allows them to remove that letter. The eraser icon then shifts to the current last item, so that they can remove another entry.
If the child does not interact with the tablet for X seconds we play a Ping/Need Input sound
At the end of the assessment there is an End screen instructing the child to give back the tablet
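The replay-limit rule above ("no more than X times, then grey out or disappear") is a small piece of UI state; a sketch of how it could be modelled, with illustrative names rather than Tangerine's actual API:

```python
class ReplayButton:
    """Tracks the remaining replays of the Instructions sound and
    reports when the button should be greyed out or hidden."""

    def __init__(self, max_replays: int) -> None:
        self.remaining = max_replays

    @property
    def enabled(self) -> bool:
        # Drives greying-out / hiding the button in the UI.
        return self.remaining > 0

    def press(self) -> bool:
        """Return True if the sound should play, False once X is reached."""
        if not self.enabled:
            return False
        self.remaining -= 1
        return True
```

The same counter pattern could back the idle Ping/Need Input timer or the eraser's shift-to-last-item behaviour.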
|
1.0
|
Self Evaluation Input Type (Tablet) - The child selects from the splash screen which test to take (forms listed as custom icons and text)
Upon opening the next page they hear an audio recording - the Initial Sound/On-Open sound plays automatically.
There is a Replay button (an icon of a person speaking). The child can press this replay button no more than X times. If X is reached the button can be greyed out or disappear.
This replay button plays the Instructions sound
There is a set of letters/numbers/words/space/special characters floating at the top of the screen.
There is a space for “writing” in the middle of the screen. No individual boxes are displayed; empty space gets filled in during the task
When the child taps a letter from the top letter list, this letter is copied to the middle list but remains on top
TBC - The child can locate a next arrow
The child sees a checkmark beside the empty space for their letters
If the response is left empty and they click Next, the child hears the Blank sound prompt (“Are you sure you want to skip this one?”). This acts a little bit as a warn-if functionality where pressing Next again will let them go
The child sees an eraser under each item, or under the last item they have added, which allows them to remove that letter. The eraser icon then shifts to the current last item, so that they can remove another entry.
If the child does not interact with the tablet for X seconds we play a Ping/Need Input sound
At the end of the assessment there is an End screen instructing the child to give back the tablet
|
non_defect
|
self evaluation input type tablet child select from splash screen which test to take listing forms as custom icons and text upon opening the next page they hear an audio recording initial sound on open sound play automatically there is a replay button an icon of a person speaking the child can press this replay button no more than x times if x is reached the button can be greyed or disappear this replay button plays the instructions sound there are a set of letters numbers words space special characters floating on the top of the screen there is a space for “writing” in the middle of the screen no individual boxes are displayed empty space that gets filled in during the task when the child taps a letter from the top letter list this letter is copied to the middle list but remains on top tbc the child can locate a next arrow the child sees a checkmark beside the empty space for their letters if response is left empty and they click next the child hears the blank sound prompt “are you sure you want to skip this one ” this acts a little bit as a warn if functionality where pressing next again will let them go the child sees an eraser under each item or under the last item they have added which allows them to remove this letter the eraser icon then shifts to the current last item so that they can remove another entry if the child does not interact with the tablet for x seconds we play an ping need input sound at the end of the assessment there is an end screen instructing the child to give back the tablet
| 0
|
554,489
| 16,430,548,644
|
IssuesEvent
|
2021-05-20 00:32:05
|
googleapis/google-cloud-go
|
https://api.github.com/repos/googleapis/google-cloud-go
|
closed
|
pubsublite: TestNewSubscriberCreatesCorrectImpl is an integration test
|
api: pubsublite priority: p2 type: bug
|
This test should be filtered out for short runs as it requires credentials to pass. This was discovered while importing the code.
|
1.0
|
pubsublite: TestNewSubscriberCreatesCorrectImpl is an integration test - This test should be filtered out for short runs as it requires credentials to pass. This was discovered while importing the code.
|
non_defect
|
pubsublite testnewsubscribercreatescorrectimpl is an integration test this test should be filtered out for short runs as it requires credentials to pass this was discovered while importing the code
| 0
|
44,089
| 11,960,973,949
|
IssuesEvent
|
2020-04-05 06:08:12
|
lilydjwg/pssh
|
https://api.github.com/repos/lilydjwg/pssh
|
closed
|
Teardown code in test classes is never called
|
Priority-Medium Type-Defect auto-migrated
|
```
The test classes in test/test.py all use def teardown(self): as the definition
for the teardown functions. The problem is that the unittest API uses tearDown
as the name, so those functions are never called, and the tests leave garbage
folders behind in /tmp.
This commit in my github repository copy of parallel-ssh fixes the issue:
https://github.com/krig/parallel-ssh/commit/597cd3e0886d2b6921a72175eaad8f499e58a441
```
Original issue reported on code.google.com by `k...@koru.se` on 9 Jan 2014 at 10:29
|
1.0
|
Teardown code in test classes is never called - ```
The test classes in test/test.py all use def teardown(self): as the definition
for the teardown functions. The problem is that the unittest API uses tearDown
as the name, so those functions are never called, and the tests leave garbage
folders behind in /tmp.
This commit in my github repository copy of parallel-ssh fixes the issue:
https://github.com/krig/parallel-ssh/commit/597cd3e0886d2b6921a72175eaad8f499e58a441
```
Original issue reported on code.google.com by `k...@koru.se` on 9 Jan 2014 at 10:29
|
defect
|
teardown code in test classes is never called the test classes in test test py all use def teardown self as the definition for the teardown functions the problem is that the unittest api uses teardown as the name so those functions are never called and the tests leave garbage folders behind in tmp this commit in my github repository copy of parallel ssh fixes the issue original issue reported on code google com by k koru se on jan at
| 1
|
40,514
| 10,027,086,581
|
IssuesEvent
|
2019-07-17 08:25:50
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
closed
|
Calendar: if popup shows date and time it is placed in front of input field
|
defect
|
## 1) Environment
- PrimeFaces version: 7.0
- Does it work on the newest released PrimeFaces version? Version? No, 7.1
- Does it work on the newest sources in GitHub? (Build by source -> https://github.com/primefaces/primefaces/wiki/Building-From-Source) No
- Application server + version: Wildfly 10.1
- Affected browsers: Firefox, Chrome, ...
## 2) Expected behavior
Calendar popup open below or above the input field.
## 3) Actual behavior
If there is not enough space below the input field, the popup is placed above the input field. But it seems as if the additional space for the time part is not taken into account.
Because of that the calendar popup is hiding input field.
## 4) Steps to reproduce
1. Got to the showcase, https://www.primefaces.org/showcase/ui/input/calendar.xhtml
1. make the browser window small
1. scroll the 'Datetime' input to the bottom of the view port
1. click into the input
1. the calendar popup opens in front of the input field


|
1.0
|
Calendar: if popup shows date and time it is placed in front of input field - ## 1) Environment
- PrimeFaces version: 7.0
- Does it work on the newest released PrimeFaces version? Version? No, 7.1
- Does it work on the newest sources in GitHub? (Build by source -> https://github.com/primefaces/primefaces/wiki/Building-From-Source) No
- Application server + version: Wildfly 10.1
- Affected browsers: Firefox, Chrome, ...
## 2) Expected behavior
Calendar popup open below or above the input field.
## 3) Actual behavior
If there is not enough space below the input field, the popup is placed above the input field. But it seems as if the additional space for the time part is not taken into account.
Because of that the calendar popup is hiding input field.
## 4) Steps to reproduce
1. Got to the showcase, https://www.primefaces.org/showcase/ui/input/calendar.xhtml
1. make the browser window small
1. scroll the 'Datetime' input to the bottom of the view port
1. click into the input
1. the calendar popup opens in front of the input field


|
defect
|
calendar if popup shows date and time it is placed in front of input field environment primefaces version does it work on the newest released primefaces version version no does it work on the newest sources in github build by source no application server version wildfly affected browsers firefox chrome expected behavior calendar popup open below or above the input field actual behavior if is not enough space below the input field the popup is placed above the input field but it seams as if the additional space for the time part is not taken into account because of that the calendar popup is hiding input field steps to reproduce got to the showcase make the browser window small scroll the datetime input to the bottom of the view port click into the input the calendar popup opens in front of the input field
| 1
|
77,283
| 26,894,174,650
|
IssuesEvent
|
2023-02-06 11:07:45
|
line/armeria
|
https://api.github.com/repos/line/armeria
|
closed
|
Defect: HttpTranscodingService fails to find descriptor for nested message fields
|
defect
|
On certain conditions, armeria server fails to start with this error message:
Descriptor for the type 'PROTO_MESSAGE_FULL_NAME' does not exist.
This happens when a nested message is used as a field type from another message
i.e)
```proto
message A {
string name = 1;
B b = 2;
message B {
string name = 1;
}
}
message SomeARequest { // error when getting field b type Descriptor for this message
A.B b = 1;
}
```
The problem is that when finding the descriptor from file (HttpJsonTranscodingService:L333), Descriptors#findMessageTypeByName is used to find the descriptor.
However, this method does not find nested types
``` java
/**
* Find a message type in the file by name. Does not find nested types.
*
* @param name The unqualified type name to look for.
* @return The message type's descriptor, or {@code null} if not found.
*/
public Descriptor findMessageTypeByName(String name)
```
resulting with exception and the server hangs without starting.
|
1.0
|
Defect: HttpTranscodingService fails to find descriptor for nested message fields - On certain conditions, armeria server fails to start with this error message:
Descriptor for the type 'PROTO_MESSAGE_FULL_NAME' does not exist.
This happens when a nested message is used as a field type from another message
i.e)
```proto
message A {
string name = 1;
B b = 2;
message B {
string name = 1;
}
}
message SomeARequest { // error when getting field b type Descriptor for this message
A.B b = 1;
}
```
The problem is that when finding the descriptor from file (HttpJsonTranscodingService:L333), Descriptors#findMessageTypeByName is used to find the descriptor.
However, this method does not find nested types
``` java
/**
* Find a message type in the file by name. Does not find nested types.
*
* @param name The unqualified type name to look for.
* @return The message type's descriptor, or {@code null} if not found.
*/
public Descriptor findMessageTypeByName(String name)
```
resulting with exception and the server hangs without starting.
|
defect
|
defect httptranscodingservice fails to find descriptor for nested message fields on certain conditions armeria server fails to start with this error message descriptor for the type proto message full name does not exist this happens when a nested message is used as a field type from another message i e proto message a string name b b message b string name message somearequest error when getting field b type descriptor for this message a b b the problem is that when finding the descriptor from file httpjsontranscodingservice descriptors findmessagetypebyname is used to find the descriptor however this method does not find nested types java find a message type in the file by name does not find nested types param name the unqualified type name to look for return the message type s descriptor or code null if not found public descriptor findmessagetypebyname string name resulting with exception and the server hangs without starting
| 1
|
14,512
| 3,274,265,624
|
IssuesEvent
|
2015-10-26 09:57:57
|
dart-lang/sdk
|
https://api.github.com/repos/dart-lang/sdk
|
closed
|
Extends clause for class Object is permitted
|
Area-Language Resolution-AsDesigned
|
According to specification (3-rd edition, June 2015) "10.9 Superclasses":
"It is a compile-time error to specify an extends clause for class Object."
However test below is compiled and executed successfully.
```dart
class A { int b = 1; }
class Object extends A {}
main() {
A a = new Object();
Object o = new Object();
print(a.b);
print(o.b);
}
dart test
1
1
```
Tested on Dart VM version: 1.13.0-dev.7.3 (Fri Oct 23 04:39:45 2015) on "linux_x64"
|
1.0
|
Extends clause for class Object is permitted - According to specification (3-rd edition, June 2015) "10.9 Superclasses":
"It is a compile-time error to specify an extends clause for class Object."
However test below is compiled and executed successfully.
```dart
class A { int b = 1; }
class Object extends A {}
main() {
A a = new Object();
Object o = new Object();
print(a.b);
print(o.b);
}
dart test
1
1
```
Tested on Dart VM version: 1.13.0-dev.7.3 (Fri Oct 23 04:39:45 2015) on "linux_x64"
|
non_defect
|
extends clause for class object is permitted according to specification rd edition june superclasses it is a compile time error to specify an extends clause for class object however test below is compiled and executed successfully dart class a int b class object extends a main a a new object object o new object print a b print o b dart test tested on dart vm version dev fri oct on linux
| 0
|
30,322
| 6,106,879,859
|
IssuesEvent
|
2017-06-21 06:18:57
|
opencaching/opencaching-pl
|
https://api.github.com/repos/opencaching/opencaching-pl
|
closed
|
User can't change cachetype to owncache
|
Component_CacheEdit Priority_High Type_Defect
|
From forum (https://forum.opencaching.pl/viewtopic.php?p=134743#p134743):
> The only problem is that it keeps rejecting the own type for me - it notoriously reverts to "unusual".
>
> I can confirm there is some problem - I tried changing a random cache to own and it is not possible. It may be related to the current restrictions on this cache type
|
1.0
|
User can't change cachetype to owncache - From forum (https://forum.opencaching.pl/viewtopic.php?p=134743#p134743):
> The only problem is that it keeps rejecting the own type for me - it notoriously reverts to "unusual".
>
> I can confirm there is some problem - I tried changing a random cache to own and it is not possible. It may be related to the current restrictions on this cache type
|
defect
|
user can t change cachetype to owncache from forum jedyny problem jest taki że mi wyrzuca owna wraca notorycznie do nietypowej potwierdzam że jest jakiś problem próbowałem zmienić losową skrzynkę na own i się nie da możliwe że to jest związane z aktualnymi obostrzeniami tego typu
| 1
|
17,607
| 9,819,134,391
|
IssuesEvent
|
2019-06-13 21:06:12
|
tensorflow/tensorflow
|
https://api.github.com/repos/tensorflow/tensorflow
|
closed
|
Bug on `gather_nd` with gradient.
|
2.0.0-beta0 comp:eager stat:awaiting tensorflower type:bug/performance
|
<em>Please make sure that this is a bug. As per our [GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md), we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template</em>
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): tf2-gpu-beta
- Python version: 3.6.8
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:
You can collect some of this information using our environment capture
[script](https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh)
You can also obtain the TensorFlow version with: 1. TF 1.0: `python -c "import
tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"` 2. TF 2.0: `python -c
"import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`
**Describe the current behavior**
A simple test code
```python
v = tf.Variable(np.random.uniform(size=[2,2]), dtype=tf.float32)
with tf.GradientTape() as tape:
l = tf.gather_nd(v, [[1, 1]])
l = tf.reduce_sum(l)
grads = tape.gradient(l, v)
print(grads)
````
gives following error message
```
---------------------------------------------------------------------------
LookupError Traceback (most recent call last)
<ipython-input-12-28efd3aa3042> in <module>
5 l = tf.reduce_sum(l)
6
----> 7 grads = tape.gradient(l, v)
8 print(grads)
~/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow/python/eager/backprop.py in gradient(self, target, sources, output_gradients, unconnected_gradients)
1000 output_gradients=output_gradients,
1001 sources_raw=flat_sources_raw,
-> 1002 unconnected_gradients=unconnected_gradients)
1003
1004 if not self._persistent:
~/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow/python/eager/imperative_grad.py in imperative_grad(tape, target, sources, output_gradients, sources_raw, unconnected_gradients)
74 output_gradients,
75 sources_raw,
---> 76 compat.as_str(unconnected_gradients.value))
~/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow/python/eager/backprop.py in _gradient_function(op_name, attr_tuple, num_inputs, inputs, outputs, out_grads, skip_input_indices)
131 """
132 mock_op = _MockOp(attr_tuple, inputs, outputs, op_name, skip_input_indices)
--> 133 grad_fn = ops._gradient_registry.lookup(op_name) # pylint: disable=protected-access
134 if grad_fn is None:
135 return [None] * num_inputs
~/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow/python/framework/registry.py in lookup(self, name)
95 else:
96 raise LookupError(
---> 97 "%s registry has no entry for: %s" % (self._name, name))
LookupError: gradient registry has no entry for: ResourceGatherNd
```
**Describe the expected behavior**
the grads should be `[[0, 0], [0, 1]]` but error occurs.
**Code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate the problem.
**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
|
True
|
Bug on `gather_nd` with gradient. - <em>Please make sure that this is a bug. As per our [GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md), we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template</em>
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): tf2-gpu-beta
- Python version: 3.6.8
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:
You can collect some of this information using our environment capture
[script](https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh)
You can also obtain the TensorFlow version with: 1. TF 1.0: `python -c "import
tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"` 2. TF 2.0: `python -c
"import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`
**Describe the current behavior**
A simple test code
```python
v = tf.Variable(np.random.uniform(size=[2,2]), dtype=tf.float32)
with tf.GradientTape() as tape:
l = tf.gather_nd(v, [[1, 1]])
l = tf.reduce_sum(l)
grads = tape.gradient(l, v)
print(grads)
````
gives following error message
```
---------------------------------------------------------------------------
LookupError Traceback (most recent call last)
<ipython-input-12-28efd3aa3042> in <module>
5 l = tf.reduce_sum(l)
6
----> 7 grads = tape.gradient(l, v)
8 print(grads)
~/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow/python/eager/backprop.py in gradient(self, target, sources, output_gradients, unconnected_gradients)
1000 output_gradients=output_gradients,
1001 sources_raw=flat_sources_raw,
-> 1002 unconnected_gradients=unconnected_gradients)
1003
1004 if not self._persistent:
~/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow/python/eager/imperative_grad.py in imperative_grad(tape, target, sources, output_gradients, sources_raw, unconnected_gradients)
74 output_gradients,
75 sources_raw,
---> 76 compat.as_str(unconnected_gradients.value))
~/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow/python/eager/backprop.py in _gradient_function(op_name, attr_tuple, num_inputs, inputs, outputs, out_grads, skip_input_indices)
131 """
132 mock_op = _MockOp(attr_tuple, inputs, outputs, op_name, skip_input_indices)
--> 133 grad_fn = ops._gradient_registry.lookup(op_name) # pylint: disable=protected-access
134 if grad_fn is None:
135 return [None] * num_inputs
~/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow/python/framework/registry.py in lookup(self, name)
95 else:
96 raise LookupError(
---> 97 "%s registry has no entry for: %s" % (self._name, name))
LookupError: gradient registry has no entry for: ResourceGatherNd
```
**Describe the expected behavior**
the grads should be `[[0, 0], [0, 1]]` but error occurs.
**Code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate the problem.
**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
|
non_defect
|
bug on gather nd with gradient please make sure that this is a bug as per our we only address code doc bugs performance issues feature requests and build installation issues on github tag bug template system information have i written custom code as opposed to using a stock example script provided in tensorflow no os platform and distribution e g linux ubuntu linux ubuntu mobile device e g iphone pixel samsung galaxy if the issue happens on mobile device tensorflow installed from source or binary pip tensorflow version use command below gpu beta python version bazel version if compiling from source gcc compiler version if compiling from source cuda cudnn version gpu model and memory you can collect some of this information using our environment capture you can also obtain the tensorflow version with tf python c import tensorflow as tf print tf git version tf version tf python c import tensorflow as tf print tf version git version tf version version describe the current behavior a simple test code python v tf variable np random uniform size dtype tf with tf gradienttape as tape l tf gather nd v l tf reduce sum l grads tape gradient l v print grads gives following error message lookuperror traceback most recent call last in l tf reduce sum l grads tape gradient l v print grads envs lib site packages tensorflow python eager backprop py in gradient self target sources output gradients unconnected gradients output gradients output gradients sources raw flat sources raw unconnected gradients unconnected gradients if not self persistent envs lib site packages tensorflow python eager imperative grad py in imperative grad tape target sources output gradients sources raw unconnected gradients output gradients sources raw compat as str unconnected gradients value envs lib site packages tensorflow python eager backprop py in gradient function op name attr tuple num inputs inputs outputs out grads skip input indices mock op mockop attr tuple inputs outputs op name skip input 
indices grad fn ops gradient registry lookup op name pylint disable protected access if grad fn is none return num inputs envs lib site packages tensorflow python framework registry py in lookup self name else raise lookuperror s registry has no entry for s self name name lookuperror gradient registry has no entry for resourcegathernd describe the expected behavior the grads should be but error occurs code to reproduce the issue provide a reproducible test case that is the bare minimum necessary to generate the problem other info logs include any logs or source code that would be helpful to diagnose the problem if including tracebacks please include the full traceback large logs and files should be attached
| 0
|
71,645
| 7,253,927,120
|
IssuesEvent
|
2018-02-16 08:51:09
|
eclipse/microprofile-config
|
https://api.github.com/repos/eclipse/microprofile-config
|
opened
|
Create a test for no defined config value
|
test
|
If:
A value with no default value specified in `@ConfigProperty` and in no config source
When:
Injecting the value using Optional
Then:
value.orElse("Not defined") should give the value "Not defined"
An example code: https://github.com/payara/Payara-Examples/blob/27eda7b9da1eff4dbe2748ad03d0780c26bb1ba8/microprofile/config-injection/src/main/java/fish/payara/examples/microprofile/configinjection/ShowConfigValues.java#L303
|
1.0
|
Create a test for no defined config value - If:
A value with no default value specified in `@ConfigProperty` and in no config source
When:
Injecting the value using Optional
Then:
value.orElse("Not defined") should give the value "Not defined"
An example code: https://github.com/payara/Payara-Examples/blob/27eda7b9da1eff4dbe2748ad03d0780c26bb1ba8/microprofile/config-injection/src/main/java/fish/payara/examples/microprofile/configinjection/ShowConfigValues.java#L303
|
non_defect
|
create a test for no defined config value if a value with no default value specified in configproperty and in no config source when injecting the value using optional then value orelse not defined should give the value not defined an example code
| 0
|
153,727
| 5,902,364,144
|
IssuesEvent
|
2017-05-19 01:01:22
|
minio/minio
|
https://api.github.com/repos/minio/minio
|
closed
|
How can I extend the storage size after deployment?
|
priority: low
|
I have a situation, when I deploy my minio server, I don't know what storage size I need.
How can I extend the storage size after deployment?
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used (`minio version`):
Version: 2017-03-16T21:50:32Z
Release-Tag: RELEASE.2017-03-16T21-50-32Z
Commit-ID: 5311eb22fd681a8cd4a46e2a872d46c2352c64e8
* Server type and version:
Deploy type : 4 minio nodes share 1 drive, 4 minio client are directly deploy on operating system.
* Operating System and version (`uname -a`):
Linux LFG1000644016 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
* Link to your project:
n/a
|
1.0
|
How can I extend the storage size after deployment? - I have a situation, when I deploy my minio server, I don't know what storage size I need.
How can I extend the storage size after deployment?
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used (`minio version`):
Version: 2017-03-16T21:50:32Z
Release-Tag: RELEASE.2017-03-16T21-50-32Z
Commit-ID: 5311eb22fd681a8cd4a46e2a872d46c2352c64e8
* Server type and version:
Deploy type : 4 minio nodes share 1 drive, 4 minio client are directly deploy on operating system.
* Operating System and version (`uname -a`):
Linux LFG1000644016 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
* Link to your project:
n/a
|
non_defect
|
how can i extend the storage size after deployment i have a situation when i deploy my minio server i don t know what storage size i need how can i extend the storage size after deployment your environment version used minio version version release tag release commit id server type and version deploy type minio nodes share drive minio client are directly deploy on operating system operating system and version uname a linux smp fri mar utc gnu linux link to your project n a
| 0
|
40,927
| 10,228,434,021
|
IssuesEvent
|
2019-08-17 02:20:50
|
ascott18/TellMeWhen
|
https://api.github.com/repos/ascott18/TellMeWhen
|
closed
|
[Wrong Timer] Vision of Perfection Wrong Timers On Procs
|
defect more-info-needed
|
For warlocks Vision of Perfection Major Essence (Tested on Rank 3) has a chance to summon your major Guardian (Infernal, Darkglare or Demonic Tyrant) for 35% of its base duration. However when the proc happens the duration on the icon's timer shows 100% uptime (30sec) and not the real 35% (10.5sec) and even when the guardian disappears the timer keeps going to 0 even though there is no guardian present. This issue is present in the Guardians Icon Option.
|
1.0
|
[Wrong Timer] Vision of Perfection Wrong Timers On Procs - For warlocks Vision of Perfection Major Essence (Tested on Rank 3) has a chance to summon your major Guardian (Infernal, Darkglare or Demonic Tyrant) for 35% of its base duration. However when the proc happens the duration on the icon's timer shows 100% uptime (30sec) and not the real 35% (10.5sec) and even when the guardian disappears the timer keeps going to 0 even though there is no guardian present. This issue is present in the Guardians Icon Option.
|
defect
|
vision of perfection wrong timers on procs for warlocks vision of perfection major essence tested on rank has a chance to summon your major guardian infernal darkglare or demonic tyrant for of its base duration however when the proc happens the duration on the icon s timer shows uptime and not the real and even when the guardian disappears the timer keeps going to even though there is no guardian present this issue is present in the guardians icon option
| 1
|
115,142
| 17,273,757,949
|
IssuesEvent
|
2021-07-23 01:04:11
|
brogers588/Java_Demo
|
https://api.github.com/repos/brogers588/Java_Demo
|
opened
|
CVE-2021-35043 (Medium) detected in antisamy-1.5.3.jar
|
security vulnerability
|
## CVE-2021-35043 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>antisamy-1.5.3.jar</b></p></summary>
<p>The OWASP AntiSamy project is a collection of APIs for safely allowing users to supply their own HTML
and CSS without exposing the site to XSS vulnerabilities.</p>
<p>Library home page: <a href="http://www.owasp.org/index.php/Category:OWASP_AntiSamy_Project">http://www.owasp.org/index.php/Category:OWASP_AntiSamy_Project</a></p>
<p>Path to dependency file: Java_Demo/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/owasp/antisamy/antisamy/1.5.3/antisamy-1.5.3.jar</p>
<p>
Dependency Hierarchy:
- esapi-2.1.0.1.jar (Root Library)
- :x: **antisamy-1.5.3.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
OWASP AntiSamy before 1.6.4 allows XSS via HTML attributes when using the HTML output serializer (XHTML is not affected). This was demonstrated by a javascript: URL with &#00058; as the replacement for the : character.
<p>Publish Date: 2021-07-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-35043>CVE-2021-35043</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-35043">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-35043</a></p>
<p>Release Date: 2021-07-19</p>
<p>Fix Resolution: org.owasp.antisamy:antisamy:1.6.4</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.owasp.antisamy","packageName":"antisamy","packageVersion":"1.5.3","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.owasp.esapi:esapi:2.1.0.1;org.owasp.antisamy:antisamy:1.5.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.owasp.antisamy:antisamy:1.6.4"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-35043","vulnerabilityDetails":"OWASP AntiSamy before 1.6.4 allows XSS via HTML attributes when using the HTML output serializer (XHTML is not affected). This was demonstrated by a javascript: URL with \u0026#00058 as the replacement for the : character.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-35043","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"N/A","AC":"N/A","PR":"N/A","S":"N/A","C":"N/A","UI":"N/A","AV":"N/A","I":"N/A"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-35043 (Medium) detected in antisamy-1.5.3.jar - ## CVE-2021-35043 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>antisamy-1.5.3.jar</b></p></summary>
<p>The OWASP AntiSamy project is a collection of APIs for safely allowing users to supply their own HTML
and CSS without exposing the site to XSS vulnerabilities.</p>
<p>Library home page: <a href="http://www.owasp.org/index.php/Category:OWASP_AntiSamy_Project">http://www.owasp.org/index.php/Category:OWASP_AntiSamy_Project</a></p>
<p>Path to dependency file: Java_Demo/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/owasp/antisamy/antisamy/1.5.3/antisamy-1.5.3.jar</p>
<p>
Dependency Hierarchy:
- esapi-2.1.0.1.jar (Root Library)
- :x: **antisamy-1.5.3.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
OWASP AntiSamy before 1.6.4 allows XSS via HTML attributes when using the HTML output serializer (XHTML is not affected). This was demonstrated by a javascript: URL with : as the replacement for the : character.
<p>Publish Date: 2021-07-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-35043>CVE-2021-35043</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-35043">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-35043</a></p>
<p>Release Date: 2021-07-19</p>
<p>Fix Resolution: org.owasp.antisamy:antisamy:1.6.4</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.owasp.antisamy","packageName":"antisamy","packageVersion":"1.5.3","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.owasp.esapi:esapi:2.1.0.1;org.owasp.antisamy:antisamy:1.5.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.owasp.antisamy:antisamy:1.6.4"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-35043","vulnerabilityDetails":"OWASP AntiSamy before 1.6.4 allows XSS via HTML attributes when using the HTML output serializer (XHTML is not affected). This was demonstrated by a javascript: URL with \u0026#00058 as the replacement for the : character.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-35043","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"N/A","AC":"N/A","PR":"N/A","S":"N/A","C":"N/A","UI":"N/A","AV":"N/A","I":"N/A"},"extraData":{}}</REMEDIATE> -->
|
non_defect
|
cve medium detected in antisamy jar cve medium severity vulnerability vulnerable library antisamy jar the owasp antisamy project is a collection of apis for safely allowing users to supply their own html and css without exposing the site to xss vulnerabilities library home page a href path to dependency file java demo pom xml path to vulnerable library home wss scanner repository org owasp antisamy antisamy antisamy jar dependency hierarchy esapi jar root library x antisamy jar vulnerable library found in base branch master vulnerability details owasp antisamy before allows xss via html attributes when using the html output serializer xhtml is not affected this was demonstrated by a javascript url with as the replacement for the character publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org owasp antisamy antisamy isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree org owasp esapi esapi org owasp antisamy antisamy isminimumfixversionavailable true minimumfixversion org owasp antisamy antisamy basebranches vulnerabilityidentifier cve vulnerabilitydetails owasp antisamy before allows xss via html attributes when using the html output serializer xhtml is not affected this was demonstrated by a javascript url with as the replacement for the character vulnerabilityurl
| 0
|
5,629
| 2,610,192,181
|
IssuesEvent
|
2015-02-26 19:00:43
|
chrsmith/quchuseban
|
https://api.github.com/repos/chrsmith/quchuseban
|
opened
|
转载怎样冶疗脸上黑色斑点
|
auto-migrated Priority-Medium Type-Defect
|
```
《摘要》
哭的时候,我会闭上眼睛不让它流泪;孤独寂寞的时候,我��
�静静的想着某人;伤心的时候,我会找个地方静静的发呆,�
��后告诉自己,还是要面对坚持下去;难过的时候,我会伪装
自己,对别人说:我很好、我很开心;失落的时候,我会笑��
�对自己说,没事的,一切总会过去。黄褐斑最主要的就是不�
��等长了黄褐斑才想要祛斑,那是得不偿失的,劳神费才的事
情。同时在去除黄褐斑的时候你要知道面部黄褐斑是怎么来��
�,这才能有效去除黄褐斑。怎样冶疗脸上黑色斑点,
《客户案例》
我在很小的时候就有黄褐斑了,算是遗传的吧,我妈脸��
�就有斑,在很小的时候老妈可就特别担心我,说一个小女孩�
��斑不好看,怕我心里有什么阴影。也许是我比较马大哈吧,
再加上年龄小那时黄褐斑也少根本就
没在意,大约十几岁的时候知道爱美了,看别的女孩脸上都��
�白白的,就我脸上有黄褐斑,心里就不舒服了
,就偷偷的找那些祛斑偏方,有很多教怎么去黄褐斑的,用��
�很多,也没什么效果。后来工作了,就去买祛斑口服,没想�
��没用两次,肚子就不舒服了,疼了一星期,我可不敢再用了
,为了去掉这些该死的黄褐斑,我还
想过做激光,可我在网上看一个女孩做了激光还留疤了,这��
�太不划算了,经过这么些年的折腾,我也知
道像我这种遗传性黄褐斑是不能彻底去掉的,就想能不能找��
�什么祛斑的淡化一下,后来就找到了「黛芙薇尔精华液」,�
��是在网上问了我的问题,很多网友说这个挺有效果的,我就
忍不住去他们商城上
仔细看了看,看成分都是精华的,对身体应该没什么副作用��
�又咨询了他们专家,说这个是通过精华调理
祛斑的,我感觉应该还不错,就试着订了两个周期,没想到��
�黄褐斑还真的淡化下去了,现在脸白白的,皮
肤也好了很多。
阅读了怎样冶疗脸上黑色斑点,再看脸上容易长斑的原因:
《色斑形成原因》
内部因素
一、压力
当人受到压力时,就会分泌肾上腺素,为对付压力而做��
�备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏�
��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃
。
二、荷尔蒙分泌失调
避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞��
�分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在�
��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕
中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出
现斑,这时候出现的斑点在产后大部分会消失。可是,新陈��
�谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等�
��因,都会使斑加深。有时新长出的斑,产后也不会消失,所
以需要更加注意。
三、新陈代谢缓慢
肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑��
�因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态�
��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是
内分泌失调导致过敏体质而形成的。另外,身体状态不正常��
�时候,紫外线的照射也会加速斑的形成。
四、错误的使用化妆品
使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在��
�疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵�
��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的
问题。
外部因素
一、紫外线
照射紫外线的时候,人体为了保护皮肤,会在基底层产��
�很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更�
��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化,
还会引起黑斑、雀斑等色素沉着的皮肤疾患。
二、不良的清洁习惯
因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。��
�皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦�
��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的
问题。
三、遗传基因
父母中有长斑的,则本人长斑的概率就很高,这种情况��
�一定程度上就可判定是遗传基因的作用。所以家里特别是长�
��有长斑的人,要注意避免引发长斑的重要因素之一——紫外
线照射,这是预防斑必须注意的。
《有疑问帮你解决》
1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐��
�去掉吗?
答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触��
�的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必�
��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑
,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时��
�,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的�
��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显
而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新��
�客都是通过老顾客介绍而来,口碑由此而来!
2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?
答:黛芙薇尔精华液应用了精纯复合配方和领先的分类��
�斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻�
��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有
效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾��
�地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技��
�,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽�
��迹,令每一位爱美的女性都能享受到科技创新所带来的自然
之美。
专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数��
�百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!
3,去除黄褐斑之后,会反弹吗?
答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔��
�白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家�
��据斑的形成原因精心研制而成用事实说话,让消费者打分。
树立权威品牌!我们的很多新客户都是老客户介绍而来,请问�
��如果效果不好,会有客户转介绍吗?
4,你们的价格有点贵,能不能便宜一点?
答:如果您使用西药最少需要2000元,煎服的药最少需要3
000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去�
��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的��
�是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的�
��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉��
�,不但斑没去掉,还把自己的皮肤弄的越来越糟吗
5,我适合用黛芙薇尔精华液吗?
答:黛芙薇尔适用人群:
1、生理紊乱引起的黄褐斑人群
2、生育引起的妊娠斑人群
3、年纪增长引起的老年斑人群
4、化妆品色素沉积、辐射斑人群
5、长期日照引起的日晒斑人群
6、肌肤暗淡急需美白的人群
《祛斑小方法》
怎样冶疗脸上黑色斑点,同时为您分享祛斑小方法
1、葛根玫瑰茶
原料:葛根5克,玫瑰花2克。红茶1克。
做法:将原料混合后,用沸水冲泡,加盖闷5分钟即成。
2、葛根煲银鱼
原料:葛根50克,小银鱼100克,豆腐100克,食盐2克。
做法:在煮沸的汤水中,加入洗净的葛根、小银鱼、豆腐(��
�小块),先用旺火再改用文火煲35分钟,加食盐调味即成。
以上就是简单介绍了如何美容祛斑,这只能淡化色斑,要想��
�底祛斑还是中药内调好。
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 5:21
|
1.0
|
转载怎样冶疗脸上黑色斑点 - ```
《摘要》
哭的时候,我会闭上眼睛不让它流泪;孤独寂寞的时候,我��
�静静的想着某人;伤心的时候,我会找个地方静静的发呆,�
��后告诉自己,还是要面对坚持下去;难过的时候,我会伪装
自己,对别人说:我很好、我很开心;失落的时候,我会笑��
�对自己说,没事的,一切总会过去。黄褐斑最主要的就是不�
��等长了黄褐斑才想要祛斑,那是得不偿失的,劳神费才的事
情。同时在去除黄褐斑的时候你要知道面部黄褐斑是怎么来��
�,这才能有效去除黄褐斑。怎样冶疗脸上黑色斑点,
《客户案例》
我在很小的时候就有黄褐斑了,算是遗传的吧,我妈脸��
�就有斑,在很小的时候老妈可就特别担心我,说一个小女孩�
��斑不好看,怕我心里有什么阴影。也许是我比较马大哈吧,
再加上年龄小那时黄褐斑也少根本就
没在意,大约十几岁的时候知道爱美了,看别的女孩脸上都��
�白白的,就我脸上有黄褐斑,心里就不舒服了
,就偷偷的找那些祛斑偏方,有很多教怎么去黄褐斑的,用��
�很多,也没什么效果。后来工作了,就去买祛斑口服,没想�
��没用两次,肚子就不舒服了,疼了一星期,我可不敢再用了
,为了去掉这些该死的黄褐斑,我还
想过做激光,可我在网上看一个女孩做了激光还留疤了,这��
�太不划算了,经过这么些年的折腾,我也知
道像我这种遗传性黄褐斑是不能彻底去掉的,就想能不能找��
�什么祛斑的淡化一下,后来就找到了「黛芙薇尔精华液」,�
��是在网上问了我的问题,很多网友说这个挺有效果的,我就
忍不住去他们商城上
仔细看了看,看成分都是精华的,对身体应该没什么副作用��
�又咨询了他们专家,说这个是通过精华调理
祛斑的,我感觉应该还不错,就试着订了两个周期,没想到��
�黄褐斑还真的淡化下去了,现在脸白白的,皮
肤也好了很多。
阅读了怎样冶疗脸上黑色斑点,再看脸上容易长斑的原因:
《色斑形成原因》
内部因素
一、压力
当人受到压力时,就会分泌肾上腺素,为对付压力而做��
�备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏�
��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃
。
二、荷尔蒙分泌失调
避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞��
�分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在�
��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕
中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出
现斑,这时候出现的斑点在产后大部分会消失。可是,新陈��
�谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等�
��因,都会使斑加深。有时新长出的斑,产后也不会消失,所
以需要更加注意。
三、新陈代谢缓慢
肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑��
�因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态�
��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是
内分泌失调导致过敏体质而形成的。另外,身体状态不正常��
�时候,紫外线的照射也会加速斑的形成。
四、错误的使用化妆品
使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在��
�疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵�
��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的
问题。
外部因素
一、紫外线
照射紫外线的时候,人体为了保护皮肤,会在基底层产��
�很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更�
��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化,
还会引起黑斑、雀斑等色素沉着的皮肤疾患。
二、不良的清洁习惯
因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。��
�皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦�
��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的
问题。
三、遗传基因
父母中有长斑的,则本人长斑的概率就很高,这种情况��
�一定程度上就可判定是遗传基因的作用。所以家里特别是长�
��有长斑的人,要注意避免引发长斑的重要因素之一——紫外
线照射,这是预防斑必须注意的。
《有疑问帮你解决》
1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐��
�去掉吗?
答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触��
�的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必�
��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑
,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时��
�,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的�
��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显
而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新��
�客都是通过老顾客介绍而来,口碑由此而来!
2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?
答:黛芙薇尔精华液应用了精纯复合配方和领先的分类��
�斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻�
��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有
效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾��
�地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技��
�,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽�
��迹,令每一位爱美的女性都能享受到科技创新所带来的自然
之美。
专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数��
�百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!
3,去除黄褐斑之后,会反弹吗?
答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔��
�白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家�
��据斑的形成原因精心研制而成用事实说话,让消费者打分。
树立权威品牌!我们的很多新客户都是老客户介绍而来,请问�
��如果效果不好,会有客户转介绍吗?
4,你们的价格有点贵,能不能便宜一点?
答:如果您使用西药最少需要2000元,煎服的药最少需要3
000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去�
��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的��
�是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的�
��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉��
�,不但斑没去掉,还把自己的皮肤弄的越来越糟吗
5,我适合用黛芙薇尔精华液吗?
答:黛芙薇尔适用人群:
1、生理紊乱引起的黄褐斑人群
2、生育引起的妊娠斑人群
3、年纪增长引起的老年斑人群
4、化妆品色素沉积、辐射斑人群
5、长期日照引起的日晒斑人群
6、肌肤暗淡急需美白的人群
《祛斑小方法》
怎样冶疗脸上黑色斑点,同时为您分享祛斑小方法
1、葛根玫瑰茶
原料:葛根5克,玫瑰花2克。红茶1克。
做法:将原料混合后,用沸水冲泡,加盖闷5分钟即成。
2、葛根煲银鱼
原料:葛根50克,小银鱼100克,豆腐100克,食盐2克。
做法:在煮沸的汤水中,加入洗净的葛根、小银鱼、豆腐(��
�小块),先用旺火再改用文火煲35分钟,加食盐调味即成。
以上就是简单介绍了如何美容祛斑,这只能淡化色斑,要想��
�底祛斑还是中药内调好。
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 5:21
|
defect
|
转载怎样冶疗脸上黑色斑点 《摘要》 哭的时候,我会闭上眼睛不让它流泪;孤独寂寞的时候,我�� �静静的想着某人;伤心的时候,我会找个地方静静的发呆,� ��后告诉自己,还是要面对坚持下去;难过的时候,我会伪装 自己,对别人说:我很好、我很开心;失落的时候,我会笑�� �对自己说,没事的,一切总会过去。黄褐斑最主要的就是不� ��等长了黄褐斑才想要祛斑,那是得不偿失的,劳神费才的事 情。同时在去除黄褐斑的时候你要知道面部黄褐斑是怎么来�� �,这才能有效去除黄褐斑。怎样冶疗脸上黑色斑点, 《客户案例》 我在很小的时候就有黄褐斑了,算是遗传的吧,我妈脸�� �就有斑,在很小的时候老妈可就特别担心我,说一个小女孩� ��斑不好看,怕我心里有什么阴影。也许是我比较马大哈吧, 再加上年龄小那时黄褐斑也少根本就 没在意,大约十几岁的时候知道爱美了,看别的女孩脸上都�� �白白的,就我脸上有黄褐斑,心里就不舒服了 ,就偷偷的找那些祛斑偏方,有很多教怎么去黄褐斑的,用�� �很多,也没什么效果。后来工作了,就去买祛斑口服,没想� ��没用两次,肚子就不舒服了,疼了一星期,我可不敢再用了 ,为了去掉这些该死的黄褐斑,我还 想过做激光,可我在网上看一个女孩做了激光还留疤了,这�� �太不划算了,经过这么些年的折腾,我也知 道像我这种遗传性黄褐斑是不能彻底去掉的,就想能不能找�� �什么祛斑的淡化一下,后来就找到了「黛芙薇尔精华液」,� ��是在网上问了我的问题,很多网友说这个挺有效果的,我就 忍不住去他们商城上 仔细看了看,看成分都是精华的,对身体应该没什么副作用�� �又咨询了他们专家,说这个是通过精华调理 祛斑的,我感觉应该还不错,就试着订了两个周期,没想到�� �黄褐斑还真的淡化下去了,现在脸白白的,皮 肤也好了很多。 阅读了怎样冶疗脸上黑色斑点,再看脸上容易长斑的原因: 《色斑形成原因》 内部因素 一、压力 当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。 二、荷尔蒙分泌失调 避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加, — 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。 三、新陈代谢缓慢 肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。 四、错误的使用化妆品 使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。 外部因素 一、紫外线 照射紫外线的时候,人体为了保护皮肤,会在基底层产�� �很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 还会引起黑斑、雀斑等色素沉着的皮肤疾患。 二、不良的清洁习惯 因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。 三、遗传基因 父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� ��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》 黛芙薇尔精华液真的有效果吗 真的可以把脸上的黄褐�� �去掉吗 答:黛芙薇尔精华液dna精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� ��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� �,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 而易见。自产品上市以来,老顾客纷纷介绍新顾客, 的新�� �客都是通过老顾客介绍而来,口碑由此而来 ,服用黛芙薇尔美白,会伤身体吗 有副作用吗 答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� 
�斑科技,并将“dna美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作, �� �,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� �百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖 ,去除黄褐斑之后,会反弹吗 答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌 我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗 ,你们的价格有点贵,能不能便宜一点 答: , , ,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助 一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗 你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗 ,我适合用黛芙薇尔精华液吗 答:黛芙薇尔适用人群: 、生理紊乱引起的黄褐斑人群 、生育引起的妊娠斑人群 、年纪增长引起的老年斑人群 、化妆品色素沉积、辐射斑人群 、长期日照引起的日晒斑人群 、肌肤暗淡急需美白的人群 《祛斑小方法》 怎样冶疗脸上黑色斑点,同时为您分享祛斑小方法 、葛根玫瑰茶 原料: , 。 。 做法:将原料混合后,用沸水冲泡, 。 、葛根煲银鱼 原料: , , , 。 做法:在煮沸的汤水中,加入洗净的葛根、小银鱼、豆腐(�� �小块), ,加食盐调味即成。 以上就是简单介绍了如何美容祛斑,这只能淡化色斑,要想�� �底祛斑还是中药内调好。 original issue reported on code google com by additive gmail com on jul at
| 1
|
61,189
| 17,023,629,744
|
IssuesEvent
|
2021-07-03 03:00:42
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
multipolygons in potlatch
|
Component: potlatch (flash editor) Priority: major Resolution: invalid Type: defect
|
**[Submitted to the original trac issue database at 9.46am, Sunday, 5th September 2010]**
I noticed that potlatch doesn't display multipolygons correctly: it requires the tags for the relation (e.g. building=yes) to be attached to the outer way, which is clearly false. It doesn't interpret (display) them right when set inside the relation.
|
1.0
|
multipolygons in potlatch - **[Submitted to the original trac issue database at 9.46am, Sunday, 5th September 2010]**
I noticed that potlatch doesn't display multipolygons correctly: it requires the tags for the relation (e.g. building=yes) to be attached to the outer way, which is clearly false. It doesn't interpret (display) them right when set inside the relation.
|
defect
|
multipolygons in potlatch i noticed that potlatch doesn t display multipolygons correctly it requires the tags for the relation e g building yes to be attached to the outer way which is clearly false it doesn t interpret display them right when set inside the relation
| 1
|
280,016
| 8,677,003,217
|
IssuesEvent
|
2018-11-30 15:38:38
|
wix/wix-ui
|
https://api.github.com/repos/wix/wix-ui
|
reopened
|
AutoDocs - TypeError when prop has defaultProp but not propTypes
|
Low Priority bug
|
I am not sure if it is a valid React thing to do.
But I had a component with :
```
Component.defaultProps = {
children: <div/>
}
Component.proptypes = {
someOtherProp: bool
// children has no prop type
}
```
AutoDocs gets a TypeError on this line:
https://github.com/wix/wix-ui/blob/16c7e017ac4bf2e77b96a3f678cadcdca0169591/packages/wix-storybook-utils/src/AutoExample/index.js#L287
`type` is undefined.
I think AutoDocs should throw an error when parsing the file, when it creates a parsedProp that doesn't have a type. And give an informational error.
Is there a way to recover from this error, (if this is a valid situation)?
|
1.0
|
AutoDocs - TypeError when prop has defaultProp but not propTypes - I am not sure if it is a valid React thing to do.
But I had a component with :
```
Component.defaultProps = {
children: <div/>
}
Component.proptypes = {
someOtherProp: bool
// children has no prop type
}
```
AutoDocs gets a TypeError on this line:
https://github.com/wix/wix-ui/blob/16c7e017ac4bf2e77b96a3f678cadcdca0169591/packages/wix-storybook-utils/src/AutoExample/index.js#L287
`type` is undefined.
I think AutoDocs should throw an error when parsing the file, when it creates a parsedProp that doesn't have a type. And give an informational error.
Is there a way to recover from this error, (if this is a valid situation)?
|
non_defect
|
autodocs typeerror when prop has defaultprop but not proptypes i am not sure if it is a valid react thing to do but i had a component with component defaultprops children component proptypes someotherprop bool children has no prop type autodocs gets a typeerror on this line type is undefined i think autodocs should throw an error when parsing the file when it creates a parsedprop that doesn t have a type and give an informational error is there a way to recover from this error if this is a valid situation
| 0
|
302,527
| 9,275,804,488
|
IssuesEvent
|
2019-03-20 00:01:58
|
all-rit/AccessibilityLab1
|
https://api.github.com/repos/all-rit/AccessibilityLab1
|
closed
|
Code Editor Tweaks
|
Priority: High Status: Completed System: Client System: Server Type: Enhancement
|
Currently, the code editor is quite lacking in functionality.
---
Plans:
- [x] Add a new `.css` file where users can tweak the hint box's background color for correct and incorrect answers.
- [x] For the file `Hintbox.js`, remove the first line (if statement) and focus on two inputs (messages for correct and incorrect answers).
- [x] Create a new table `AudioCue_SourceCode` for user inputs in the database.
- [x] Create a new endpoint for storing data in the newly created table.
- [x] Communicate with the new endpoint in the client.
- [x] Comments in the CSS file
- [x] Add more information on what each file is for
|
1.0
|
Code Editor Tweaks - Currently, the code editor is quite lacking in functionality.
---
Plans:
- [x] Add a new `.css` file where users can tweak the hint box's background color for correct and incorrect answers.
- [x] For the file `Hintbox.js`, remove the first line (if statement) and focus on two inputs (messages for correct and incorrect answers).
- [x] Create a new table `AudioCue_SourceCode` for user inputs in the database.
- [x] Create a new endpoint for storing data in the newly created table.
- [x] Communicate with the new endpoint in the client.
- [x] Comments in the CSS file
- [x] Add more information on what each file is for
|
non_defect
|
code editor tweaks currently the code editor is quite lacking in functionality plans add a new css file where users can tweak the hint box s background color for correct and incorrect answers for the file hintbox js remove the first line if statement and focus on two inputs messages for correct and incorrect answers create a new table audiocue sourcecode for user inputs in the database create a new endpoint for storing data in the newly created table communicate with the new endpoint in the client comments in the css file add more information on what each file is for
| 0
|
52,003
| 13,211,358,998
|
IssuesEvent
|
2020-08-15 22:33:47
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
opened
|
[dataclasses] mctree old->new conversion error (Trac #1459)
|
Incomplete Migration Migrated from Trac combo core defect
|
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1459">https://code.icecube.wisc.edu/projects/icecube/ticket/1459</a>, reported by david.schultzand owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:57",
"_ts": "1550067117911749",
"description": "For file: `/data/ana/Cscd/StartingEvents/NuGen/NuMu/2011_domeff_099/l3/9/l3_00008635.i3.bz2`\n\n{{{\nFATAL (Tree): Assertion failed: insertResult.second (I3MCTree_impl.h:475 in void TreeBase::Tree<T, Key, Hash>::append_child(const Key&, const T&) [with T = I3Particle, Key = I3ParticleID, Hash = __gnu_cxx::hash<I3ParticleID>])\nERROR (PythonFunction): Error running python function as module: (PythonFunction.cxx:173 in virtual void PythonFunction::Process())\nERROR (I3Module): CollectStats: Exception thrown (I3Module.cxx:113 in void I3Module::Do(void (I3Module::*)()))\nTraceback (most recent call last):\n File \"track_veto.py\", line 333, in <module>\n tray.Execute()\n File \"/data/user/nwandkowsky/tarballs/icerec.V04-11-02/build/lib/I3Tray.py\", line 234, in Execute\n super(I3Tray, self).Execute()\n File \"track_veto.py\", line 248, in collectStats\n print frame[\"I3MCTree\"]\n}}}\n\nI'm investigating now.",
"reporter": "david.schultz",
"cc": "nwandkowsky",
"resolution": "fixed",
"time": "2015-12-01T17:50:07",
"component": "combo core",
"summary": "[dataclasses] mctree old->new conversion error",
"priority": "blocker",
"keywords": "mctree",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
[dataclasses] mctree old->new conversion error (Trac #1459) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1459">https://code.icecube.wisc.edu/projects/icecube/ticket/1459</a>, reported by david.schultzand owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:57",
"_ts": "1550067117911749",
"description": "For file: `/data/ana/Cscd/StartingEvents/NuGen/NuMu/2011_domeff_099/l3/9/l3_00008635.i3.bz2`\n\n{{{\nFATAL (Tree): Assertion failed: insertResult.second (I3MCTree_impl.h:475 in void TreeBase::Tree<T, Key, Hash>::append_child(const Key&, const T&) [with T = I3Particle, Key = I3ParticleID, Hash = __gnu_cxx::hash<I3ParticleID>])\nERROR (PythonFunction): Error running python function as module: (PythonFunction.cxx:173 in virtual void PythonFunction::Process())\nERROR (I3Module): CollectStats: Exception thrown (I3Module.cxx:113 in void I3Module::Do(void (I3Module::*)()))\nTraceback (most recent call last):\n File \"track_veto.py\", line 333, in <module>\n tray.Execute()\n File \"/data/user/nwandkowsky/tarballs/icerec.V04-11-02/build/lib/I3Tray.py\", line 234, in Execute\n super(I3Tray, self).Execute()\n File \"track_veto.py\", line 248, in collectStats\n print frame[\"I3MCTree\"]\n}}}\n\nI'm investigating now.",
"reporter": "david.schultz",
"cc": "nwandkowsky",
"resolution": "fixed",
"time": "2015-12-01T17:50:07",
"component": "combo core",
"summary": "[dataclasses] mctree old->new conversion error",
"priority": "blocker",
"keywords": "mctree",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
|
defect
|
mctree old new conversion error trac migrated from json status closed changetime ts description for file data ana cscd startingevents nugen numu domeff n n nfatal tree assertion failed insertresult second impl h in void treebase tree append child const key const t nerror pythonfunction error running python function as module pythonfunction cxx in virtual void pythonfunction process nerror collectstats exception thrown cxx in void do void ntraceback most recent call last n file track veto py line in n tray execute n file data user nwandkowsky tarballs icerec build lib py line in execute n super self execute n file track veto py line in collectstats n print frame n n ni m investigating now reporter david schultz cc nwandkowsky resolution fixed time component combo core summary mctree old new conversion error priority blocker keywords mctree milestone owner olivas type defect
| 1
|
5,687
| 29,924,867,780
|
IssuesEvent
|
2023-06-22 04:03:56
|
OpenLightingProject/ola
|
https://api.github.com/repos/OpenLightingProject/ola
|
opened
|
New protoc versioning breaks configure
|
bug OpSys-OSX Maintainability
|
```
checking for protoc... /usr/local/bin/protoc
+ test -z /usr/local/bin/protoc
+ test -n 2.3.0
+ printf '%s\n' 'configure:25925: checking protoc version'
+ printf %s 'checking protoc version... '
++ /usr/local/bin/protoc --version
++ grep libprotoc
++ sed 's/.*\([0-9][0-9]*\.[0-9][0-9]*\.[0-9][0-9]*\).*/\1/g'
+ protoc_version='libprotoc 23.3'
+ required=2.3.0
++ echo 2.3.0
++ sed 's/[^0-9].*//'
+ required_major=2
++ echo 2.3.0
++ sed 's/[0-9][0-9]*\.\([0-9][0-9]*\)\.[0-9][0-9]*/\1/'
+ required_minor=3
++ echo 2.3.0
++ sed 's/^.*[^0-9]//'
+ required_patch=0
++ echo libprotoc 23.3
++ sed 's/[^0-9].*//'
+ actual_major=
++ echo libprotoc 23.3
++ sed 's/[0-9][0-9]*\.\([0-9][0-9]*\)\.[0-9][0-9]*/\1/'
+ actual_minor='libprotoc 23.3'
++ echo libprotoc 23.3
++ sed 's/^.*[^0-9]//'
+ actual_patch=3
++ expr '>' 2 '|' = 2 '&' libprotoc 23.3 '>' 3 '|' = 2 '&' libprotoc 23.3 = 3 '&' 3 '>=' 0
expr: syntax error
+ protoc_version_proper=
+ test '' = 1
+ as_fn_error 1 'protoc version too old libprotoc 23.3 < 2.3.0' 25948 5
+ as_status=1
+ test 1 -eq 0
+ test 5
+ as_lineno=25948
+ as_lineno_stack=as_lineno_stack=
+ printf '%s\n' 'configure:25948: error: protoc version too old libprotoc 23.3 < 2.3.0'
+ printf '%s\n' 'configure: error: protoc version too old libprotoc 23.3 < 2.3.0'
configure: error: protoc version too old libprotoc 23.3 < 2.3.0
```
See https://github.com/OpenLightingProject/ola/actions/runs/5341378456/jobs/9682134019
https://protobuf.dev/support/version-support/
|
True
|
New protoc versioning breaks configure - ```
checking for protoc... /usr/local/bin/protoc
+ test -z /usr/local/bin/protoc
+ test -n 2.3.0
+ printf '%s\n' 'configure:25925: checking protoc version'
+ printf %s 'checking protoc version... '
++ /usr/local/bin/protoc --version
++ grep libprotoc
++ sed 's/.*\([0-9][0-9]*\.[0-9][0-9]*\.[0-9][0-9]*\).*/\1/g'
+ protoc_version='libprotoc 23.3'
+ required=2.3.0
++ echo 2.3.0
++ sed 's/[^0-9].*//'
+ required_major=2
++ echo 2.3.0
++ sed 's/[0-9][0-9]*\.\([0-9][0-9]*\)\.[0-9][0-9]*/\1/'
+ required_minor=3
++ echo 2.3.0
++ sed 's/^.*[^0-9]//'
+ required_patch=0
++ echo libprotoc 23.3
++ sed 's/[^0-9].*//'
+ actual_major=
++ echo libprotoc 23.3
++ sed 's/[0-9][0-9]*\.\([0-9][0-9]*\)\.[0-9][0-9]*/\1/'
+ actual_minor='libprotoc 23.3'
++ echo libprotoc 23.3
++ sed 's/^.*[^0-9]//'
+ actual_patch=3
++ expr '>' 2 '|' = 2 '&' libprotoc 23.3 '>' 3 '|' = 2 '&' libprotoc 23.3 = 3 '&' 3 '>=' 0
expr: syntax error
+ protoc_version_proper=
+ test '' = 1
+ as_fn_error 1 'protoc version too old libprotoc 23.3 < 2.3.0' 25948 5
+ as_status=1
+ test 1 -eq 0
+ test 5
+ as_lineno=25948
+ as_lineno_stack=as_lineno_stack=
+ printf '%s\n' 'configure:25948: error: protoc version too old libprotoc 23.3 < 2.3.0'
+ printf '%s\n' 'configure: error: protoc version too old libprotoc 23.3 < 2.3.0'
configure: error: protoc version too old libprotoc 23.3 < 2.3.0
```
See https://github.com/OpenLightingProject/ola/actions/runs/5341378456/jobs/9682134019
https://protobuf.dev/support/version-support/
|
non_defect
|
new protoc versioning breaks configure checking for protoc usr local bin protoc test z usr local bin protoc test n printf s n configure checking protoc version printf s checking protoc version usr local bin protoc version grep libprotoc sed s g protoc version libprotoc required echo sed s required major echo sed s required minor echo sed s required patch echo libprotoc sed s actual major echo libprotoc sed s actual minor libprotoc echo libprotoc sed s actual patch expr libprotoc libprotoc expr syntax error protoc version proper test as fn error protoc version too old libprotoc as status test eq test as lineno as lineno stack as lineno stack printf s n configure error protoc version too old libprotoc printf s n configure error protoc version too old libprotoc configure error protoc version too old libprotoc see
| 0
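The protoc record above shows the root cause: the `expr`-based three-component check in `configure` breaks once `protoc --version` reports a two-component version such as `libprotoc 23.3`. A minimal sketch of a more robust comparison, assuming GNU coreutils `sort -V` is available (the function name `protoc_version_ok` is hypothetical, not part of OLA's configure script):

```shell
# Hypothetical replacement for the expr-based version check.
# Compares dotted version strings of any component count by letting
# `sort -V` (GNU version sort) order them; the actual version is new
# enough iff the required version sorts first or they are equal.
protoc_version_ok() {
    required="$1"
    actual="$2"
    [ "$(printf '%s\n%s\n' "$required" "$actual" | sort -V | head -n1)" = "$required" ]
}

protoc_version_ok 2.3.0 23.3 && echo "ok: 23.3 >= 2.3.0"
protoc_version_ok 2.3.0 2.2.9 || echo "ok: 2.2.9 < 2.3.0"
```

This handles both old `2.x.y` and new `23.x` numbering without parsing fixed fields, which is what the `sed`/`expr` pipeline in the log assumed.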
|
36,534
| 7,978,960,378
|
IssuesEvent
|
2018-07-17 20:04:35
|
aleofreddi/svgpan
|
https://api.github.com/repos/aleofreddi/svgpan
|
closed
|
svg not following css width and height values properly under linux
|
Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. Open http://www.sitepoint.com/examples/svg/thewall.html in Chrome
17.0.963.79 on Ubuntu
What is the expected output? What do you see instead?
Expected: The .svg overlay should expand to fill the screen
Actual: The .svg fills the screen vertically, but remains square.
What version of the product are you using? On what operating system?
Chrome 17.0.963.79 on Pinguy Eee, based on Ubuntu 10.04
Please provide any additional information below.
This site seems to work under Chrome for Windows.
```
Original issue reported on code.google.com by `Steve.Le...@gmail.com` on 21 Mar 2012 at 4:56
|
1.0
|
svg not following css width and height values properly under linux - ```
What steps will reproduce the problem?
1. Open http://www.sitepoint.com/examples/svg/thewall.html in Chrome
17.0.963.79 on Ubuntu
What is the expected output? What do you see instead?
Expected: The .svg overlay should expand to fill the screen
Actual: The .svg fills the screen vertically, but remains square.
What version of the product are you using? On what operating system?
Chrome 17.0.963.79 on Pinguy Eee, based on Ubuntu 10.04
Please provide any additional information below.
This site seems to work under Chrome for Windows.
```
Original issue reported on code.google.com by `Steve.Le...@gmail.com` on 21 Mar 2012 at 4:56
|
defect
|
svg not following css width and height values properly under linux what steps will reproduce the problem open in chrome on ubuntu what is the expected output what do you see instead expected the svg overlay should expand to fill the screen actual the svg fills the screen vertically but remains square what version of the product are you using on what operating system chrome on pinguy eee based on ubuntu please provide any additional information below this site seems to work under chrome for windows original issue reported on code google com by steve le gmail com on mar at
| 1
|
28,074
| 5,183,540,154
|
IssuesEvent
|
2017-01-20 01:17:19
|
TNGSB/eWallet
|
https://api.github.com/repos/TNGSB/eWallet
|
closed
|
e-Wallet_Mobile App (OTP Request - System allowed expired OTP) 05012017 #13
|
Defect - High (Sev-2) Live Environment
|
Test Case Description :
1. To verify the input of expired OTP - exceed 3 minutes after receiving the OTP
(for all functionality. example : forgot password, registration, forgot PIN)
Defect Description : When user requested the 1st OTP, OTP was send to user's mobile successfully. After 3 minutes, user requested 2nd OTP. After successfully received the 2nd OTP, user to input the 1st OTP (which supposedly expired), to proceed with action. However, system allowed user to proceed with the 1st OTP.
Defect description 2 : Another defect that user found is,user are allowed to click "Click to received OTP" even though user has requested for the 1st OTP and click back button, and click again to request 2nd OTP. By right the "Click to receive OTP" should not clickable as user has requested 1st OTP previously.
<this scenario are performed within 3 minutes>
Tested with : IOS build 11 & Android build 13
UserID : Annemohd17
Phone Model : Iphone 6 & Samsung Note 5
|
1.0
|
e-Wallet_Mobile App (OTP Request - System allowed expired OTP) 05012017 #13 - Test Case Description :
1. To verify the input of expired OTP - exceed 3 minutes after receiving the OTP
(for all functionality. example : forgot password, registration, forgot PIN)
Defect Description : When user requested the 1st OTP, OTP was send to user's mobile successfully. After 3 minutes, user requested 2nd OTP. After successfully received the 2nd OTP, user to input the 1st OTP (which supposedly expired), to proceed with action. However, system allowed user to proceed with the 1st OTP.
Defect description 2 : Another defect that user found is,user are allowed to click "Click to received OTP" even though user has requested for the 1st OTP and click back button, and click again to request 2nd OTP. By right the "Click to receive OTP" should not clickable as user has requested 1st OTP previously.
<this scenario are performed within 3 minutes>
Tested with : IOS build 11 & Android build 13
UserID : Annemohd17
Phone Model : Iphone 6 & Samsung Note 5
|
defect
|
e wallet mobile app otp request system allowed expired otp test case description to verify the input of expired otp exceed minutes after receiving the otp for all functionality example forgot password registration forgot pin defect description when user requested the otp otp was send to user s mobile successfully after minutes user requested otp after successfully received the otp user to input the otp which supposedly expired to proceed with action however system allowed user to proceed with the otp defect description another defect that user found is user are allowed to click click to received otp even though user has requested for the otp and click back button and click again to request otp by right the click to receive otp should not clickable as user has requested otp previously tested with ios build android build userid phone model iphone samsung note
| 1
|
796,573
| 28,118,680,083
|
IssuesEvent
|
2023-03-31 12:46:22
|
status-im/status-desktop
|
https://api.github.com/repos/status-im/status-desktop
|
opened
|
AC: clicking on a notification no longer jumps to the message
|
bug Activity_center priority F2: important E:ActivityCenter messenger-team
|
# Bug Report
Seems like a recent regression from when we switched to the flat chat model; the error is:
```
ERR 2023-03-31 14:44:53.740+02:00 chat-view unexisting item id: topics="chat-section-module" tid=159075 file=module.nim:440 itemId=0x03073514d4c14a7d10ae9fc9b0f05abc904d84166a6ac80add58bf6a3542a4e50a86bf12e1-35e0-460b-96af-d80020f82050 methodName=activeItemSet
ERR 2023-03-31 14:44:53.740+02:00 chat-view unexisting item id: topics="chat-section-module" tid=159075 file=module.nim:440 itemId=0x03073514d4c14a7d10ae9fc9b0f05abc904d84166a6ac80add58bf6a3542a4e50a86bf12e1-35e0-460b-96af-d80020f82050 methodName=activeItemSet
DBG 2023-03-31 14:44:53.745+02:00 NewBE_callPrivateRPC topics="rpc" tid=159075 file=core.nim:27 rpc_method=wakuext_markAsSeenActivityCenterNotifications
DBG 2023-03-31 14:44:53.750+02:00 NewBE_callPrivateRPC topics="rpc" tid=159075 file=core.nim:27 rpc_method=wakuext_hasUnseenActivityCenterNotifications
```
### Additional Information
- Status desktop version: master
- Operating System: linux
|
1.0
|
AC: clicking on a notification no longer jumps to the message - # Bug Report
Seems like a recent regression from when we switched to the flat chat model; the error is:
```
ERR 2023-03-31 14:44:53.740+02:00 chat-view unexisting item id: topics="chat-section-module" tid=159075 file=module.nim:440 itemId=0x03073514d4c14a7d10ae9fc9b0f05abc904d84166a6ac80add58bf6a3542a4e50a86bf12e1-35e0-460b-96af-d80020f82050 methodName=activeItemSet
ERR 2023-03-31 14:44:53.740+02:00 chat-view unexisting item id: topics="chat-section-module" tid=159075 file=module.nim:440 itemId=0x03073514d4c14a7d10ae9fc9b0f05abc904d84166a6ac80add58bf6a3542a4e50a86bf12e1-35e0-460b-96af-d80020f82050 methodName=activeItemSet
DBG 2023-03-31 14:44:53.745+02:00 NewBE_callPrivateRPC topics="rpc" tid=159075 file=core.nim:27 rpc_method=wakuext_markAsSeenActivityCenterNotifications
DBG 2023-03-31 14:44:53.750+02:00 NewBE_callPrivateRPC topics="rpc" tid=159075 file=core.nim:27 rpc_method=wakuext_hasUnseenActivityCenterNotifications
```
### Additional Information
- Status desktop version: master
- Operating System: linux
|
non_defect
|
ac clicking on a notification no longer jumps to the message bug report seems like a recent regression from when we switched to the flat chat model the error is err chat view unexisting item id topics chat section module tid file module nim itemid methodname activeitemset err chat view unexisting item id topics chat section module tid file module nim itemid methodname activeitemset dbg newbe callprivaterpc topics rpc tid file core nim rpc method wakuext markasseenactivitycenternotifications dbg newbe callprivaterpc topics rpc tid file core nim rpc method wakuext hasunseenactivitycenternotifications additional information status desktop version master operating system linux
| 0
|
137,174
| 12,747,104,284
|
IssuesEvent
|
2020-06-26 17:13:49
|
Varga-CodeAnon/SylvidresTournament
|
https://api.github.com/repos/Varga-CodeAnon/SylvidresTournament
|
opened
|
LICENSE Bandeau site
|
documentation
|
- [ ] Penser à bien documenter l'image à l'aide d'un fichier license potable
|
1.0
|
LICENSE Bandeau site - - [ ] Penser à bien documenter l'image à l'aide d'un fichier license potable
|
non_defect
|
license bandeau site penser à bien documenter l image à l aide d un fichier license potable
| 0
|
127,239
| 17,201,898,665
|
IssuesEvent
|
2021-07-17 12:15:33
|
POSSF/POSSF
|
https://api.github.com/repos/POSSF/POSSF
|
closed
|
Team Page: style need
|
design
|
سلام صفحه تیم الان استایل نداره. باید مطابق قالب قبلی استایل بگیره. (در نظر بگیرید که توی این قالب بوت استرپ نداریم پس
media query
ها دستی نوشته بشه. ولی کار سختی نیست.
مشابه: https://basemax.github.io/POSSF
|
1.0
|
Team Page: style need - سلام صفحه تیم الان استایل نداره. باید مطابق قالب قبلی استایل بگیره. (در نظر بگیرید که توی این قالب بوت استرپ نداریم پس
media query
ها دستی نوشته بشه. ولی کار سختی نیست.
مشابه: https://basemax.github.io/POSSF
|
non_defect
|
team page style need سلام صفحه تیم الان استایل نداره باید مطابق قالب قبلی استایل بگیره در نظر بگیرید که توی این قالب بوت استرپ نداریم پس media query ها دستی نوشته بشه ولی کار سختی نیست مشابه
| 0
|
482,907
| 13,915,635,215
|
IssuesEvent
|
2020-10-21 01:11:19
|
magento/magento2
|
https://api.github.com/repos/magento/magento2
|
closed
|
[Issue] MFTF: Admin Delete Customer Test Refactoring
|
Component: Customer Issue: Format is not valid Priority: P3 Progress: ready for dev Severity: S3
|
This issue is automatically created based on existing pull request: magento/magento2#28121: MFTF: Admin Delete Customer Test Refactoring
---------
This PR contains refactored MFTF Test for deleting customer entity as Admin User
Steps to reproduce:
1 - Create Customer entity
2 - Login As Admin
3 - Delete created customer
4 - Perform Assertions
|
1.0
|
[Issue] MFTF: Admin Delete Customer Test Refactoring - This issue is automatically created based on existing pull request: magento/magento2#28121: MFTF: Admin Delete Customer Test Refactoring
---------
This PR contains refactored MFTF Test for deleting customer entity as Admin User
Steps to reproduce:
1 - Create Customer entity
2 - Login As Admin
3 - Delete created customer
4 - Perform Assertions
|
non_defect
|
mftf admin delete customer test refactoring this issue is automatically created based on existing pull request magento mftf admin delete customer test refactoring this pr contains refactored mftf test for deleting customer entity as admin user steps to reproduce create customer entity login as admin delete created customer perform assertions
| 0
|
562
| 2,570,889,788
|
IssuesEvent
|
2015-02-10 13:10:40
|
literat/dsw-oddil-wp-theme
|
https://api.github.com/repos/literat/dsw-oddil-wp-theme
|
closed
|
Klikání na podstránky
|
defect template
|
ad klikání na podstránky:
- o tomto problému vím
- je to vlastnost bootstrapu, jehož filozofií nejsou moc víceúrovňové menu
- problém vzniknul, když jsem lámal bootstrap, aby byl použitelný s wordpressem, nějaké řešení někde už jsem viděl
- musím jenom implementovat
|
1.0
|
Klikání na podstránky - ad klikání na podstránky:
- o tomto problému vím
- je to vlastnost bootstrapu, jehož filozofií nejsou moc víceúrovňové menu
- problém vzniknul, když jsem lámal bootstrap, aby byl použitelný s wordpressem, nějaké řešení někde už jsem viděl
- musím jenom implementovat
|
defect
|
klikání na podstránky ad klikání na podstránky o tomto problému vím je to vlastnost bootstrapu jehož filozofií nejsou moc víceúrovňové menu problém vzniknul když jsem lámal bootstrap aby byl použitelný s wordpressem nějaké řešení někde už jsem viděl musím jenom implementovat
| 1
|
43,093
| 12,965,185,694
|
IssuesEvent
|
2020-07-20 21:51:12
|
ingridc/InscripcionUNQ
|
https://api.github.com/repos/ingridc/InscripcionUNQ
|
opened
|
CVE-2014-0114 (High) detected in commons-beanutils-1.9.2.jar
|
security vulnerability
|
## CVE-2014-0114 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-beanutils-1.9.2.jar</b></p></summary>
<p>Apache Commons BeanUtils provides an easy-to-use but flexible wrapper around reflection and introspection.</p>
<p>Path to vulnerable library: /InscripcionUNQ/target/inscripcionunq-0.0.1-SNAPSHOT/WEB-INF/lib/commons-beanutils-1.9.2.jar,/home/wss-scanner/.m2/repository/commons-beanutils/commons-beanutils/1.9.2/commons-beanutils-1.9.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **commons-beanutils-1.9.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ingridc/InscripcionUNQ/commit/d9cb7906db7f6ee073aeb15e2e5e30eaa58ce214">d9cb7906db7f6ee073aeb15e2e5e30eaa58ce214</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Commons BeanUtils, as distributed in lib/commons-beanutils-1.8.0.jar in Apache Struts 1.x through 1.3.10 and in other products requiring commons-beanutils through 1.9.2, does not suppress the class property, which allows remote attackers to "manipulate" the ClassLoader and execute arbitrary code via the class parameter, as demonstrated by the passing of this parameter to the getClass method of the ActionForm object in Struts 1.
<p>Publish Date: 2014-04-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2014-0114>CVE-2014-0114</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0114">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0114</a></p>
<p>Release Date: 2014-04-30</p>
<p>Fix Resolution: commons-beanutils:commons-beanutils:1.9.4;org.apache.struts:struts2-core:2.0.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2014-0114 (High) detected in commons-beanutils-1.9.2.jar - ## CVE-2014-0114 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-beanutils-1.9.2.jar</b></p></summary>
<p>Apache Commons BeanUtils provides an easy-to-use but flexible wrapper around reflection and introspection.</p>
<p>Path to vulnerable library: /InscripcionUNQ/target/inscripcionunq-0.0.1-SNAPSHOT/WEB-INF/lib/commons-beanutils-1.9.2.jar,/home/wss-scanner/.m2/repository/commons-beanutils/commons-beanutils/1.9.2/commons-beanutils-1.9.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **commons-beanutils-1.9.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ingridc/InscripcionUNQ/commit/d9cb7906db7f6ee073aeb15e2e5e30eaa58ce214">d9cb7906db7f6ee073aeb15e2e5e30eaa58ce214</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Commons BeanUtils, as distributed in lib/commons-beanutils-1.8.0.jar in Apache Struts 1.x through 1.3.10 and in other products requiring commons-beanutils through 1.9.2, does not suppress the class property, which allows remote attackers to "manipulate" the ClassLoader and execute arbitrary code via the class parameter, as demonstrated by the passing of this parameter to the getClass method of the ActionForm object in Struts 1.
<p>Publish Date: 2014-04-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2014-0114>CVE-2014-0114</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0114">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0114</a></p>
<p>Release Date: 2014-04-30</p>
<p>Fix Resolution: commons-beanutils:commons-beanutils:1.9.4;org.apache.struts:struts2-core:2.0.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in commons beanutils jar cve high severity vulnerability vulnerable library commons beanutils jar apache commons beanutils provides an easy to use but flexible wrapper around reflection and introspection path to vulnerable library inscripcionunq target inscripcionunq snapshot web inf lib commons beanutils jar home wss scanner repository commons beanutils commons beanutils commons beanutils jar dependency hierarchy x commons beanutils jar vulnerable library found in head commit a href vulnerability details apache commons beanutils as distributed in lib commons beanutils jar in apache struts x through and in other products requiring commons beanutils through does not suppress the class property which allows remote attackers to manipulate the classloader and execute arbitrary code via the class parameter as demonstrated by the passing of this parameter to the getclass method of the actionform object in struts publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution commons beanutils commons beanutils org apache struts core step up your open source security game with whitesource
| 0
|
12,465
| 2,700,589,414
|
IssuesEvent
|
2015-04-04 10:05:51
|
vaites/gnome-integration-thunderbird
|
https://api.github.com/repos/vaites/gnome-integration-thunderbird
|
closed
|
Account tab doesn't show all the accounts
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. Using recent Thunderbird with gecko 17+
What is the expected output? What do you see instead?
Account doesn't appear in the configuration tab
I've fixed the issue with the attached patch.
Cheers,
Roberto
```
Original issue reported on code.google.com by `roby.fic...@gmail.com` on 17 Oct 2013 at 10:07
Attachments:
* [patch.diff](https://storage.googleapis.com/google-code-attachments/gnome-integration-thunderbird/issue-3/comment-0/patch.diff)
|
1.0
|
Account tab doesn't show all the accounts - ```
What steps will reproduce the problem?
1. Using recent Thunderbird with gecko 17+
What is the expected output? What do you see instead?
Account doesn't appear in the configuration tab
I've fixed the issue with the attached patch.
Cheers,
Roberto
```
Original issue reported on code.google.com by `roby.fic...@gmail.com` on 17 Oct 2013 at 10:07
Attachments:
* [patch.diff](https://storage.googleapis.com/google-code-attachments/gnome-integration-thunderbird/issue-3/comment-0/patch.diff)
|
defect
|
account tab doesn t show all the accounts what steps will reproduce the problem using recent thunderbird with gecko what is the expected output what do you see instead account doesn t appear in the configuration tab i ve fixed the issue with the attached patch cheers roberto original issue reported on code google com by roby fic gmail com on oct at attachments
| 1
|
53,041
| 13,260,838,677
|
IssuesEvent
|
2020-08-20 18:50:48
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
closed
|
I3Tray finish gets called due to exceptions in constructors/Configure methods (Trac #615)
|
IceTray Migrated from Trac defect
|
An exception in service or module constructors causes I3Finish to be called without there being a driving module yet, which seems to confuse I3Finish and makes it throw its own exception. The resulting second exception could be confusing to users..
Here is an example of how this looks like on the standard output:
lilliput/private/minimizer/I3GSLMultiMin.cxx:105: ERROR: The "vector_bfgs2" minimizer is not available in your version of GSL. Update to version 1.14 or newer.
/lilliput/private/minimizer/I3GSLMultiMin.cxx:189: FATAL: Unknown minimizer algorithm "vector_bfgs2"!
Traceback (most recent call last):
File "./aartfit.py", line 553, in <module>
tray.Execute()
File "/Users/claudio/Documents/Uni/IceTray/test/build.searecsim.release/lib/I3Tray.py", line 118, in Execute
args[0].the_tray.Execute()
RuntimeError: Unknown minimizer algorithm "vector_bfgs2"!
I3Tray finishing...
/icetray/private/icetray/I3Tray.cxx:457: FATAL: Attempt to call finish, but there is no driving module. Did you forget to call Execute()?
terminate called after throwing an instance of 'std::runtime_error'
what(): Attempt to call finish, but there is no driving module. Did you forget to call Execute()?
Abort trap
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/615">https://code.icecube.wisc.edu/projects/icecube/ticket/615</a>, reported by icecubeand owned by troy</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2012-10-31T20:57:48",
"_ts": "1351717068000000",
"description": "An exception in service or module constructors causes I3Finish to be called without there being a driving module yet, which seems to confuse I3Finish and makes it throw its own exception. The resulting second exception could be confusing to users..\n\n\n\nHere is an example of how this looks like on the standard output:\n\nlilliput/private/minimizer/I3GSLMultiMin.cxx:105: ERROR: The \"vector_bfgs2\" minimizer is not available in your version of GSL. Update to version 1.14 or newer.\n/lilliput/private/minimizer/I3GSLMultiMin.cxx:189: FATAL: Unknown minimizer algorithm \"vector_bfgs2\"!\nTraceback (most recent call last):\n File \"./aartfit.py\", line 553, in <module>\n tray.Execute()\n File \"/Users/claudio/Documents/Uni/IceTray/test/build.searecsim.release/lib/I3Tray.py\", line 118, in Execute\n args[0].the_tray.Execute()\nRuntimeError: Unknown minimizer algorithm \"vector_bfgs2\"!\nI3Tray finishing...\n/icetray/private/icetray/I3Tray.cxx:457: FATAL: Attempt to call finish, but there is no driving module. Did you forget to call Execute()?\nterminate called after throwing an instance of 'std::runtime_error'\n what(): Attempt to call finish, but there is no driving module. Did you forget to call Execute()?\nAbort trap\n",
"reporter": "icecube",
"cc": "claudio.kopper@physik.uni-erlangen.de",
"resolution": "fixed",
"time": "2010-07-30T10:11:17",
"component": "IceTray",
"summary": "I3Tray finish gets called due to exceptions in constructors/Configure methods",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
I3Tray finish gets called due to exceptions in constructors/Configure methods (Trac #615) - An exception in service or module constructors causes I3Finish to be called without there being a driving module yet, which seems to confuse I3Finish and makes it throw its own exception. The resulting second exception could be confusing to users..
Here is an example of how this looks like on the standard output:
lilliput/private/minimizer/I3GSLMultiMin.cxx:105: ERROR: The "vector_bfgs2" minimizer is not available in your version of GSL. Update to version 1.14 or newer.
/lilliput/private/minimizer/I3GSLMultiMin.cxx:189: FATAL: Unknown minimizer algorithm "vector_bfgs2"!
Traceback (most recent call last):
File "./aartfit.py", line 553, in <module>
tray.Execute()
File "/Users/claudio/Documents/Uni/IceTray/test/build.searecsim.release/lib/I3Tray.py", line 118, in Execute
args[0].the_tray.Execute()
RuntimeError: Unknown minimizer algorithm "vector_bfgs2"!
I3Tray finishing...
/icetray/private/icetray/I3Tray.cxx:457: FATAL: Attempt to call finish, but there is no driving module. Did you forget to call Execute()?
terminate called after throwing an instance of 'std::runtime_error'
what(): Attempt to call finish, but there is no driving module. Did you forget to call Execute()?
Abort trap
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/615">https://code.icecube.wisc.edu/projects/icecube/ticket/615</a>, reported by icecubeand owned by troy</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2012-10-31T20:57:48",
"_ts": "1351717068000000",
"description": "An exception in service or module constructors causes I3Finish to be called without there being a driving module yet, which seems to confuse I3Finish and makes it throw its own exception. The resulting second exception could be confusing to users..\n\n\n\nHere is an example of how this looks like on the standard output:\n\nlilliput/private/minimizer/I3GSLMultiMin.cxx:105: ERROR: The \"vector_bfgs2\" minimizer is not available in your version of GSL. Update to version 1.14 or newer.\n/lilliput/private/minimizer/I3GSLMultiMin.cxx:189: FATAL: Unknown minimizer algorithm \"vector_bfgs2\"!\nTraceback (most recent call last):\n File \"./aartfit.py\", line 553, in <module>\n tray.Execute()\n File \"/Users/claudio/Documents/Uni/IceTray/test/build.searecsim.release/lib/I3Tray.py\", line 118, in Execute\n args[0].the_tray.Execute()\nRuntimeError: Unknown minimizer algorithm \"vector_bfgs2\"!\nI3Tray finishing...\n/icetray/private/icetray/I3Tray.cxx:457: FATAL: Attempt to call finish, but there is no driving module. Did you forget to call Execute()?\nterminate called after throwing an instance of 'std::runtime_error'\n what(): Attempt to call finish, but there is no driving module. Did you forget to call Execute()?\nAbort trap\n",
"reporter": "icecube",
"cc": "claudio.kopper@physik.uni-erlangen.de",
"resolution": "fixed",
"time": "2010-07-30T10:11:17",
"component": "IceTray",
"summary": "I3Tray finish gets called due to exceptions in constructors/Configure methods",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
</p>
</details>
|
defect
|
finish gets called due to exceptions in constructors configure methods trac an exception in service or module constructors causes to be called without there being a driving module yet which seems to confuse and makes it throw its own exception the resulting second exception could be confusing to users here is an example of how this looks like on the standard output lilliput private minimizer cxx error the vector minimizer is not available in your version of gsl update to version or newer lilliput private minimizer cxx fatal unknown minimizer algorithm vector traceback most recent call last file aartfit py line in tray execute file users claudio documents uni icetray test build searecsim release lib py line in execute args the tray execute runtimeerror unknown minimizer algorithm vector finishing icetray private icetray cxx fatal attempt to call finish but there is no driving module did you forget to call execute terminate called after throwing an instance of std runtime error what attempt to call finish but there is no driving module did you forget to call execute abort trap migrated from json status closed changetime ts description an exception in service or module constructors causes to be called without there being a driving module yet which seems to confuse and makes it throw its own exception the resulting second exception could be confusing to users n n n nhere is an example of how this looks like on the standard output n nlilliput private minimizer cxx error the vector minimizer is not available in your version of gsl update to version or newer n lilliput private minimizer cxx fatal unknown minimizer algorithm vector ntraceback most recent call last n file aartfit py line in n tray execute n file users claudio documents uni icetray test build searecsim release lib py line in execute n args the tray execute nruntimeerror unknown minimizer algorithm vector finishing n icetray private icetray cxx fatal attempt to call finish but there is no driving module did you forget to call execute nterminate called after throwing an instance of std runtime error n what attempt to call finish but there is no driving module did you forget to call execute nabort trap n reporter icecube cc claudio kopper physik uni erlangen de resolution fixed time component icetray summary finish gets called due to exceptions in constructors configure methods priority normal keywords milestone owner troy type defect
| 1
|
122,331
| 10,219,288,077
|
IssuesEvent
|
2019-08-15 18:11:24
|
dexpenses/dexpenses-extract
|
https://api.github.com/repos/dexpenses/dexpenses-extract
|
opened
|
Implement test receipt normal/goe-bonita-kaufpark-debit
|
enhancement test-data
|
Receipt to implement:

|
1.0
|
Implement test receipt normal/goe-bonita-kaufpark-debit - Receipt to implement:

|
non_defect
|
implement test receipt normal goe bonita kaufpark debit receipt to implement normal goe bonita kaufpark debit
| 0
|
65,260
| 19,303,039,078
|
IssuesEvent
|
2021-12-13 08:35:12
|
line/centraldogma
|
https://api.github.com/repos/line/centraldogma
|
closed
|
Disallow to create non-reserved files in meta repository
|
defect good first issue
|
A meta repository is a special repository to manage a project.
So it has restricted permission.
However, some users use a meta repository as a normal repository that we do not expect.
It would be nice to prohibit users from creating personal files.
|
1.0
|
Disallow to create non-reserved files in meta repository - A meta repository is a special repository to manage a project.
So it has restricted permission.
However, some users use a meta repository as a normal repository that we do not expect.
It would be nice to prohibit users from creating personal files.
|
defect
|
disallow to create non reserved files in meta repository a meta repository is a special repository to manage a project so it has restricted permission however some users use a meta repository as a normal repository that we do not expect it would be nice to prohibit users from creating personal files
| 1
|
54,398
| 13,644,631,180
|
IssuesEvent
|
2020-09-25 19:14:43
|
mozilla/experimenter
|
https://api.github.com/repos/mozilla/experimenter
|
closed
|
Intermittent test failure: Rapid serializers
|
Defect Tests
|
Looks like an ordering issue, prolly need to put them in a set or check membership
```
_ TestExperimentRapidRecipeSerializer.test_serializer_outputs_expected_schema_for_accepted _
[gw1] linux -- Python 3.8.2 /usr/local/bin/python
self = <experimenter.experiments.tests.api.v4.test_serializers.TestExperimentRapidRecipeSerializer testMethod=test_serializer_outputs_expected_schema_for_accepted>
def test_serializer_outputs_expected_schema_for_accepted(self):
audience = "us_only"
features = ["pinned_tabs", "picture_in_picture"]
experiment = ExperimentRapidFactory.create_with_status(
Experiment.STATUS_ACCEPTED,
audience=audience,
features=features,
firefox_channel=Experiment.CHANNEL_RELEASE,
firefox_min_version="80.0",
)
serializer = ExperimentRapidRecipeSerializer(experiment)
data = serializer.data
arguments = data.pop("arguments")
branches = arguments.pop("branches")
self.assertDictEqual(
data,
{
"id": experiment.recipe_slug,
"filter_expression": "env.version|versionCompare('80.!') >= 0",
"targeting": f'[userId, "{experiment.recipe_slug}"]'
"|bucketSample(0, 100, 10000) "
"&& localeLanguageCode == 'en' && region == 'US' "
"&& browserSettings.update.channel == 'release'",
"enabled": True,
},
)
self.assertDictEqual(
dict(arguments),
{
"userFacingName": experiment.name,
"userFacingDescription": experiment.public_description,
"slug": experiment.recipe_slug,
"active": True,
"isEnrollmentPaused": False,
"endDate": None,
"proposedEnrollment": experiment.proposed_enrollment,
"features": features,
"referenceBranch": "control",
"startDate": None,
"bucketConfig": {
"count": experiment.bucket.count,
"namespace": experiment.bucket.namespace.name,
"randomizationUnit": "userId",
"start": experiment.bucket.start,
"total": experiment.bucket.namespace.total,
},
},
)
converted_branches = [dict(branch) for branch in branches]
> self.assertEqual(
converted_branches,
[
{"ratio": 33, "slug": "treatment", "value": None},
{"ratio": 33, "slug": "control", "value": None},
],
)
E AssertionError: Lists differ: [{'slug': 'control', 'ratio': 33, 'value': N[51 chars]one}] != [{'ratio': 33, 'slug': 'treatment', 'value':[51 chars]one}]
E
E First differing element 0:
E {'slug': 'control', 'ratio': 33, 'value': None}
E {'ratio': 33, 'slug': 'treatment', 'value': None}
E
E - [{'ratio': 33, 'slug': 'control', 'value': None},
E - {'ratio': 33, 'slug': 'treatment', 'value': None}]
E ? ^ ^
E
E + [{'ratio': 33, 'slug': 'treatment', 'value': None},
E ? ^ ^
E
E + {'ratio': 33, 'slug': 'control', 'value': None}]
experimenter/experiments/tests/api/v4/test_serializers.py:64: AssertionError
_ TestExperimentRapidRecipeSerializer.test_serializer_outputs_expected_schema_for_live _
[gw1] linux -- Python 3.8.2 /usr/local/bin/python
self = <experimenter.experiments.tests.api.v4.test_serializers.TestExperimentRapidRecipeSerializer testMethod=test_serializer_outputs_expected_schema_for_live>
def test_serializer_outputs_expected_schema_for_live(self):
audience = "us_only"
features = ["pinned_tabs", "picture_in_picture"]
experiment = ExperimentRapidFactory.create_with_status(
Experiment.STATUS_LIVE,
audience=audience,
features=features,
firefox_channel=Experiment.CHANNEL_RELEASE,
firefox_min_version="80.0",
)
serializer = ExperimentRapidRecipeSerializer(experiment)
data = serializer.data
arguments = data.pop("arguments")
branches = arguments.pop("branches")
self.assertDictEqual(
data,
{
"id": experiment.recipe_slug,
"filter_expression": "env.version|versionCompare('80.!') >= 0",
"targeting": f'[userId, "{experiment.recipe_slug}"]'
"|bucketSample(0, 100, 10000) "
"&& localeLanguageCode == 'en' && region == 'US' "
"&& browserSettings.update.channel == 'release'",
"enabled": True,
},
)
self.assertDictEqual(
dict(arguments),
{
"userFacingName": experiment.name,
"userFacingDescription": experiment.public_description,
"slug": experiment.recipe_slug,
"active": True,
"isEnrollmentPaused": False,
"endDate": experiment.end_date.isoformat(),
"proposedEnrollment": experiment.proposed_enrollment,
"features": features,
"referenceBranch": "control",
"startDate": experiment.start_date.isoformat(),
"bucketConfig": {
"count": experiment.bucket.count,
"namespace": experiment.bucket.namespace.name,
"randomizationUnit": "userId",
"start": experiment.bucket.start,
"total": experiment.bucket.namespace.total,
},
},
)
converted_branches = [dict(branch) for branch in branches]
> self.assertEqual(
converted_branches,
[
{"ratio": 33, "slug": "treatment", "value": None},
{"ratio": 33, "slug": "control", "value": None},
],
)
E AssertionError: Lists differ: [{'slug': 'control', 'ratio': 33, 'value': N[51 chars]one}] != [{'ratio': 33, 'slug': 'treatment', 'value':[51 chars]one}]
E
E First differing element 0:
E {'slug': 'control', 'ratio': 33, 'value': None}
E {'ratio': 33, 'slug': 'treatment', 'value': None}
E
E - [{'ratio': 33, 'slug': 'control', 'value': None},
E - {'ratio': 33, 'slug': 'treatment', 'value': None}]
E ? ^ ^
E
E + [{'ratio': 33, 'slug': 'treatment', 'value': None},
E ? ^ ^
E
E + {'ratio': 33, 'slug': 'control', 'value': None}]
experimenter/experiments/tests/api/v4/test_serializers.py:125: AssertionError
```
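An order-insensitive comparison avoids this intermittent failure. The branch dicts are unhashable, so a plain `set` will not work, but `assertCountEqual` or sorting by `slug` both do. A minimal standalone sketch (not the actual serializer test):

```python
import unittest


class BranchComparisonSketch(unittest.TestCase):
    # Order-insensitive variants of the failing assertEqual.
    def test_branches_regardless_of_order(self):
        converted_branches = [
            {"ratio": 33, "slug": "control", "value": None},
            {"ratio": 33, "slug": "treatment", "value": None},
        ]
        expected = [
            {"ratio": 33, "slug": "treatment", "value": None},
            {"ratio": 33, "slug": "control", "value": None},
        ]
        # assertCountEqual matches elements ignoring order and copes
        # with unhashable elements such as dicts.
        self.assertCountEqual(converted_branches, expected)
        # Alternatively, impose a deterministic order before comparing.
        self.assertEqual(
            sorted(converted_branches, key=lambda b: b["slug"]),
            sorted(expected, key=lambda b: b["slug"]),
        )


# Run the sketch directly (avoids unittest.main() so it stays importable).
BranchComparisonSketch("test_branches_regardless_of_order").test_branches_regardless_of_order()
```

Either variant makes the assertion independent of serializer output order.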
┆Issue is synchronized with this [Jira Task](https://jira.mozilla.com/browse/EXP-355)
┆Issue Number: EXP-355
|
1.0
|
Intermittent test failure: Rapid serializers - Looks like an ordering issue, prolly need to put them in a set or check membership
```
_ TestExperimentRapidRecipeSerializer.test_serializer_outputs_expected_schema_for_accepted _
[gw1] linux -- Python 3.8.2 /usr/local/bin/python
self = <experimenter.experiments.tests.api.v4.test_serializers.TestExperimentRapidRecipeSerializer testMethod=test_serializer_outputs_expected_schema_for_accepted>
def test_serializer_outputs_expected_schema_for_accepted(self):
audience = "us_only"
features = ["pinned_tabs", "picture_in_picture"]
experiment = ExperimentRapidFactory.create_with_status(
Experiment.STATUS_ACCEPTED,
audience=audience,
features=features,
firefox_channel=Experiment.CHANNEL_RELEASE,
firefox_min_version="80.0",
)
serializer = ExperimentRapidRecipeSerializer(experiment)
data = serializer.data
arguments = data.pop("arguments")
branches = arguments.pop("branches")
self.assertDictEqual(
data,
{
"id": experiment.recipe_slug,
"filter_expression": "env.version|versionCompare('80.!') >= 0",
"targeting": f'[userId, "{experiment.recipe_slug}"]'
"|bucketSample(0, 100, 10000) "
"&& localeLanguageCode == 'en' && region == 'US' "
"&& browserSettings.update.channel == 'release'",
"enabled": True,
},
)
self.assertDictEqual(
dict(arguments),
{
"userFacingName": experiment.name,
"userFacingDescription": experiment.public_description,
"slug": experiment.recipe_slug,
"active": True,
"isEnrollmentPaused": False,
"endDate": None,
"proposedEnrollment": experiment.proposed_enrollment,
"features": features,
"referenceBranch": "control",
"startDate": None,
"bucketConfig": {
"count": experiment.bucket.count,
"namespace": experiment.bucket.namespace.name,
"randomizationUnit": "userId",
"start": experiment.bucket.start,
"total": experiment.bucket.namespace.total,
},
},
)
converted_branches = [dict(branch) for branch in branches]
> self.assertEqual(
converted_branches,
[
{"ratio": 33, "slug": "treatment", "value": None},
{"ratio": 33, "slug": "control", "value": None},
],
)
E AssertionError: Lists differ: [{'slug': 'control', 'ratio': 33, 'value': N[51 chars]one}] != [{'ratio': 33, 'slug': 'treatment', 'value':[51 chars]one}]
E
E First differing element 0:
E {'slug': 'control', 'ratio': 33, 'value': None}
E {'ratio': 33, 'slug': 'treatment', 'value': None}
E
E - [{'ratio': 33, 'slug': 'control', 'value': None},
E - {'ratio': 33, 'slug': 'treatment', 'value': None}]
E ? ^ ^
E
E + [{'ratio': 33, 'slug': 'treatment', 'value': None},
E ? ^ ^
E
E + {'ratio': 33, 'slug': 'control', 'value': None}]
experimenter/experiments/tests/api/v4/test_serializers.py:64: AssertionError
_ TestExperimentRapidRecipeSerializer.test_serializer_outputs_expected_schema_for_live _
[gw1] linux -- Python 3.8.2 /usr/local/bin/python
self = <experimenter.experiments.tests.api.v4.test_serializers.TestExperimentRapidRecipeSerializer testMethod=test_serializer_outputs_expected_schema_for_live>
def test_serializer_outputs_expected_schema_for_live(self):
audience = "us_only"
features = ["pinned_tabs", "picture_in_picture"]
experiment = ExperimentRapidFactory.create_with_status(
Experiment.STATUS_LIVE,
audience=audience,
features=features,
firefox_channel=Experiment.CHANNEL_RELEASE,
firefox_min_version="80.0",
)
serializer = ExperimentRapidRecipeSerializer(experiment)
data = serializer.data
arguments = data.pop("arguments")
branches = arguments.pop("branches")
self.assertDictEqual(
data,
{
"id": experiment.recipe_slug,
"filter_expression": "env.version|versionCompare('80.!') >= 0",
"targeting": f'[userId, "{experiment.recipe_slug}"]'
"|bucketSample(0, 100, 10000) "
"&& localeLanguageCode == 'en' && region == 'US' "
"&& browserSettings.update.channel == 'release'",
"enabled": True,
},
)
self.assertDictEqual(
dict(arguments),
{
"userFacingName": experiment.name,
"userFacingDescription": experiment.public_description,
"slug": experiment.recipe_slug,
"active": True,
"isEnrollmentPaused": False,
"endDate": experiment.end_date.isoformat(),
"proposedEnrollment": experiment.proposed_enrollment,
"features": features,
"referenceBranch": "control",
"startDate": experiment.start_date.isoformat(),
"bucketConfig": {
"count": experiment.bucket.count,
"namespace": experiment.bucket.namespace.name,
"randomizationUnit": "userId",
"start": experiment.bucket.start,
"total": experiment.bucket.namespace.total,
},
},
)
converted_branches = [dict(branch) for branch in branches]
> self.assertEqual(
converted_branches,
[
{"ratio": 33, "slug": "treatment", "value": None},
{"ratio": 33, "slug": "control", "value": None},
],
)
E AssertionError: Lists differ: [{'slug': 'control', 'ratio': 33, 'value': N[51 chars]one}] != [{'ratio': 33, 'slug': 'treatment', 'value':[51 chars]one}]
E
E First differing element 0:
E {'slug': 'control', 'ratio': 33, 'value': None}
E {'ratio': 33, 'slug': 'treatment', 'value': None}
E
E - [{'ratio': 33, 'slug': 'control', 'value': None},
E - {'ratio': 33, 'slug': 'treatment', 'value': None}]
E ? ^ ^
E
E + [{'ratio': 33, 'slug': 'treatment', 'value': None},
E ? ^ ^
E
E + {'ratio': 33, 'slug': 'control', 'value': None}]
experimenter/experiments/tests/api/v4/test_serializers.py:125: AssertionError
```
┆Issue is synchronized with this [Jira Task](https://jira.mozilla.com/browse/EXP-355)
┆Issue Number: EXP-355
|
defect
|
intermittent test failure rapid serializers looks like an ordering issue prolly need to put them in a set or check membership testexperimentrapidrecipeserializer test serializer outputs expected schema for accepted linux python usr local bin python self def test serializer outputs expected schema for accepted self audience us only features experiment experimentrapidfactory create with status experiment status accepted audience audience features features firefox channel experiment channel release firefox min version serializer experimentrapidrecipeserializer experiment data serializer data arguments data pop arguments branches arguments pop branches self assertdictequal data id experiment recipe slug filter expression env version versioncompare targeting f bucketsample localelanguagecode en region us browsersettings update channel release enabled true self assertdictequal dict arguments userfacingname experiment name userfacingdescription experiment public description slug experiment recipe slug active true isenrollmentpaused false enddate none proposedenrollment experiment proposed enrollment features features referencebranch control startdate none bucketconfig count experiment bucket count namespace experiment bucket namespace name randomizationunit userid start experiment bucket start total experiment bucket namespace total converted branches self assertequal converted branches ratio slug treatment value none ratio slug control value none e assertionerror lists differ one one e e first differing element e slug control ratio value none e ratio slug treatment value none e e ratio slug control value none e ratio slug treatment value none e e e ratio slug treatment value none e e e ratio slug control value none experimenter experiments tests api test serializers py assertionerror testexperimentrapidrecipeserializer test serializer outputs expected schema for live linux python usr local bin python self def test serializer outputs expected schema for live self audience 
us only features experiment experimentrapidfactory create with status experiment status live audience audience features features firefox channel experiment channel release firefox min version serializer experimentrapidrecipeserializer experiment data serializer data arguments data pop arguments branches arguments pop branches self assertdictequal data id experiment recipe slug filter expression env version versioncompare targeting f bucketsample localelanguagecode en region us browsersettings update channel release enabled true self assertdictequal dict arguments userfacingname experiment name userfacingdescription experiment public description slug experiment recipe slug active true isenrollmentpaused false enddate experiment end date isoformat proposedenrollment experiment proposed enrollment features features referencebranch control startdate experiment start date isoformat bucketconfig count experiment bucket count namespace experiment bucket namespace name randomizationunit userid start experiment bucket start total experiment bucket namespace total converted branches self assertequal converted branches ratio slug treatment value none ratio slug control value none e assertionerror lists differ one one e e first differing element e slug control ratio value none e ratio slug treatment value none e e ratio slug control value none e ratio slug treatment value none e e e ratio slug treatment value none e e e ratio slug control value none experimenter experiments tests api test serializers py assertionerror ┆issue is synchronized with this ┆issue number exp
| 1
|
414,244
| 12,101,251,905
|
IssuesEvent
|
2020-04-20 14:57:41
|
goby-lang/goby
|
https://api.github.com/repos/goby-lang/goby
|
closed
|
Equality methods/operators
|
Feature Priority High VM
|
Ruby supports 4 equality methods/operators:
- `==`
- `===`
- `eql?`
- `equal?`
they're traditionally confusing for devs (at least, beginners), although they do have specific semantics.
I think the Goby implementation/compatibility should be discussed, in particular, before the new testing framework release, since the testing framework will need to respect all the operator semantics.
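For intuition, a rough Python analogy covering two of these distinctions (an illustration only: Ruby's actual semantics differ in detail, and `===` case-equality has no direct Python counterpart):

```python
a = [1, 2, 3]
b = [1, 2, 3]

# Value equality -- roughly Ruby's `==` (and the stricter `eql?`,
# which additionally requires matching types: 1.eql?(1.0) is false
# in Ruby even though 1 == 1.0 is true).
print(a == b)  # True

# Identity -- roughly Ruby's `equal?`: the very same object,
# not merely an equal value.
print(a is b)  # False
print(a is a)  # True
```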
|
1.0
|
Equality methods/operators - Ruby supports 4 equality methods/operators:
- `==`
- `===`
- `eql?`
- `equal?`
they're traditionally confusing for devs (at least, beginners), although they do have specific semantics.
I think the Goby implementation/compatibility should be discussed, in particular, before the new testing framework release, since the testing framework will need to respect all the operator semantics.
|
non_defect
|
equality methods operators ruby supports equality methods operators eql equal they re traditionally confusing for devs at least beginners although they do have specific semantics i think the goby implementation compatibility should be discussed in particular before the new testing framework release since the testing framework will need to respect all the operator semantics
| 0
|
43,807
| 11,851,078,204
|
IssuesEvent
|
2020-03-24 17:34:06
|
mestrade/go-hello
|
https://api.github.com/repos/mestrade/go-hello
|
opened
|
CVE-2017-5983 - Jira-5.0.4(java)
|
security/defectDojo
|
*CVE-2017-5983 - Jira-5.0.4(java)*
*Severity:* Critical
*Cve:* CVE-2017-5983
*Product/Engagement:* test / AdHoc Import - Tue, 24 Mar 2020 15:57:40
*Systems*:
*Description*:
Image hash: sha256:8a3e381ece363cb5f0187e5f24988a8febd98e76cd5bc0562d443845066d6e58
Package: jira-5.0.4
Package path: /usr/share/jenkins/jenkins.war:WEB-INF/plugins/jira.hpi:WEB-INF/lib/jira-rest-java-client-api-5.0.4.jar
Package type: java
Feed: nvdv2/nvdv2:cves
CVE: CVE-2017-5983
CPE: cpe:/a:-:jira:5.0.4:-:-
*Mitigation*:
Upgrade to jira None
URL: https://nvd.nist.gov/vuln/detail/CVE-2017-5983
*Impact*:
*References*:https://nvd.nist.gov/vuln/detail/CVE-2017-5983
|
1.0
|
CVE-2017-5983 - Jira-5.0.4(java) - *CVE-2017-5983 - Jira-5.0.4(java)*
*Severity:* Critical
*Cve:* CVE-2017-5983
*Product/Engagement:* test / AdHoc Import - Tue, 24 Mar 2020 15:57:40
*Systems*:
*Description*:
Image hash: sha256:8a3e381ece363cb5f0187e5f24988a8febd98e76cd5bc0562d443845066d6e58
Package: jira-5.0.4
Package path: /usr/share/jenkins/jenkins.war:WEB-INF/plugins/jira.hpi:WEB-INF/lib/jira-rest-java-client-api-5.0.4.jar
Package type: java
Feed: nvdv2/nvdv2:cves
CVE: CVE-2017-5983
CPE: cpe:/a:-:jira:5.0.4:-:-
*Mitigation*:
Upgrade to jira None
URL: https://nvd.nist.gov/vuln/detail/CVE-2017-5983
*Impact*:
*References*:https://nvd.nist.gov/vuln/detail/CVE-2017-5983
|
defect
|
cve jira java cve jira java severity critical cve cve product engagement test adhoc import tue mar systems description image hash package jira package path usr share jenkins jenkins war web inf plugins jira hpi web inf lib jira rest java client api jar package type java feed cves cve cve cpe cpe a jira mitigation upgrade to jira none url impact references
| 1
|
37,788
| 8,518,431,208
|
IssuesEvent
|
2018-11-01 11:35:29
|
jccastillo0007/eFacturaT
|
https://api.github.com/repos/jccastillo0007/eFacturaT
|
opened
|
Cancelación escritorio - una factura cancelada previamente no le cambia el status
|
bug defect
|
Cuando existe una factura que ha sido cancelada previamente, si te lo indica como respuesta.
Lo que no hace es cambiar el status
|
1.0
|
Cancelación escritorio - una factura cancelada previamente no le cambia el status - Cuando existe una factura que ha sido cancelada previamente, si te lo indica como respuesta.
Lo que no hace es cambiar el status
|
defect
|
cancelación escritorio una factura cancelada previamente no le cambia el status cuando existe una factura que ha sido cancelada previamente si te lo indica como respuesta lo que no hace es cambiar el status
| 1
|
11,774
| 18,063,699,287
|
IssuesEvent
|
2021-09-20 16:32:14
|
Azure/az-hop
|
https://api.github.com/repos/Azure/az-hop
|
opened
|
users named with 9 integers only are failing to generate SSH Key
|
kind/bug area/user-management customer-requirement
|
When defining users named with 9 integers only, the creation of the SSH key will fail.
Workaround: prefix the username with characters
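A hypothetical sketch of the workaround (the `u` prefix and the helper name are assumptions for illustration, not part of az-hop):

```python
def safe_username(name: str) -> str:
    """Prefix purely numeric usernames so SSH key generation
    does not receive an all-digit name (hypothetical helper)."""
    return f"u{name}" if name.isdigit() else name


print(safe_username("123456789"))  # u123456789
print(safe_username("alice"))      # alice
```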
|
1.0
|
users named with 9 integers only are failing to generate SSH Key - When defining users named with 9 integers only, the creation of the SSH key will fail.
Workaround: prefix the username with characters
|
non_defect
|
users named with integers only are failing to generate ssh key when defining users named with integers only the creation of the ssh key will failed workaround prefix the username with characters
| 0
|
210,721
| 7,192,886,965
|
IssuesEvent
|
2018-02-03 09:52:06
|
Dallas-Makerspace/tracker
|
https://api.github.com/repos/Dallas-Makerspace/tracker
|
closed
|
(feat) 3dfab website needs updating
|
Priority/MEDIUM enhancement help wanted wontfix
|
The 3d Printer status page does not reference the correct printers nor has all the printer cams.
As a user
I would like to have all the cams working
And the push notifications working
And Browser notifications working
And Able to add my talk username to a printer queue
So that I can have a tool that helps me make things better.
|
1.0
|
(feat) 3dfab website needs updating - The 3d Printer status page does not reference the correct printers nor has all the printer cams.
As a user
I would like to have all the cams working
And the push notifications working
And Browser notifications working
And Able to add my talk username to a printer queue
So that I can have a tool that helps me make things better.
|
non_defect
|
feat website needs updating the printer status page does not reference the correct printers no has all the printer cams as a user i would like to have all the cams working and the push notifications working and browser notifications working and able to add my talk username to a printer queue so that i can have a tool that helps me make things better
| 0
|
485,433
| 13,965,501,694
|
IssuesEvent
|
2020-10-25 22:45:16
|
nhcarrigan/we-love-hacktoberfest
|
https://api.github.com/repos/nhcarrigan/we-love-hacktoberfest
|
closed
|
[DOC] - Add more reasons to the README
|
good first issue help wanted 📄 aspect: text 🟩 priority: low
|
# Incorrect Documentation
## Describe the error
<!--A clear and concise description of the incorrect documentation information.-->
The readme only has 5 reasons why we love Matt.
## Expected information
<!--A clear and concise description of what the documentation *should* say.-->
There are **way** more reasons to love Matt. Add your reasons to the README!
## Additional information
<!--Add any other context about the problem here.-->
|
1.0
|
[DOC] - Add more reasons to the README - # Incorrect Documentation
## Describe the error
<!--A clear and concise description of the incorrect documentation information.-->
The readme only has 5 reasons why we love Matt.
## Expected information
<!--A clear and concise description of what the documentation *should* say.-->
There are **way** more reasons to love Matt. Add your reasons to the README!
## Additional information
<!--Add any other context about the problem here.-->
|
non_defect
|
add more reasons to the readme incorrect documentation describe the error the readme only has reasons why we love matt expected information there are way more reasons to love matt add your reasons to the readme additional information
| 0
|
223,585
| 17,610,615,993
|
IssuesEvent
|
2021-08-18 00:21:45
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
acceptance: TestDockerPSQL failed
|
C-test-failure O-robot branch-master T-sql-experience
|
[(acceptance).TestDockerPSQL failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2561262&tab=buildLog) on [master@156ca4133fa44c0d6b09c494cf38d8b4216d5bbf](https://github.com/cockroachdb/cockroach/commits/156ca4133fa44c0d6b09c494cf38d8b4216d5bbf):
```
=== RUN TestDockerPSQL
test_log_scope.go:72: test logs captured to: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/logTestDockerPSQL213242319
test_log_scope.go:73: use -show-logs to present logs inline
=== CONT TestDockerPSQL
adapter_test.go:104: -- test log scope end --
test logs left over in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/logTestDockerPSQL213242319
--- FAIL: TestDockerPSQL (6.41s)
=== RUN TestDockerPSQL/runMode=docker
Flag --logtostderr has been deprecated, use --log instead to specify sinks.stderr.filter.
Cluster successfully initialized
(1 row)
CREATE DATABASE
CREATE TABLE
CREATE TABLE
COPY 2
1 | slide | blue | south | 2014-04-28 | 192.168.0.1
2 | swing | yellow | northwest | 2010-08-16 | ffff::ffff:12
COPY 1
3 | rope | green | east | 2015-01-02 | 192.168.0.1
1 | slide | blue | south | 2014-04-28 | 192.168.0.1
3 | rope | green | east | 2015-01-02 | 192.168.0.1
COPY 1
4 | sand | brown | west | 2016-03-04 | 192.168.0.1
psql
ERROR: expected 6 values, got 1
hooray
(0 rows)
COPY 1000
1000
Testing large row
CREATE TABLE
COPY 1
1
(1 row)
10000
Testing copy error
ERROR: relation "missing" does not exist
CREATE TABLE AS
dockercluster.go:683: unexpected extra event &{0 die} (after [])
--- FAIL: TestDockerPSQL/runMode=docker (6.41s)
```
<details><summary>More</summary><p>
Parameters:
- GOFLAGS=-json
```
make stressrace TESTS=TestDockerPSQL PKG=./pkg/acceptance TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1
```
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2ATestDockerPSQL.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
1.0
|
acceptance: TestDockerPSQL failed - [(acceptance).TestDockerPSQL failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2561262&tab=buildLog) on [master@156ca4133fa44c0d6b09c494cf38d8b4216d5bbf](https://github.com/cockroachdb/cockroach/commits/156ca4133fa44c0d6b09c494cf38d8b4216d5bbf):
```
=== RUN TestDockerPSQL
test_log_scope.go:72: test logs captured to: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/logTestDockerPSQL213242319
test_log_scope.go:73: use -show-logs to present logs inline
=== CONT TestDockerPSQL
adapter_test.go:104: -- test log scope end --
test logs left over in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/logTestDockerPSQL213242319
--- FAIL: TestDockerPSQL (6.41s)
=== RUN TestDockerPSQL/runMode=docker
Flag --logtostderr has been deprecated, use --log instead to specify sinks.stderr.filter.
Cluster successfully initialized
(1 row)
CREATE DATABASE
CREATE TABLE
CREATE TABLE
COPY 2
1 | slide | blue | south | 2014-04-28 | 192.168.0.1
2 | swing | yellow | northwest | 2010-08-16 | ffff::ffff:12
COPY 1
3 | rope | green | east | 2015-01-02 | 192.168.0.1
1 | slide | blue | south | 2014-04-28 | 192.168.0.1
3 | rope | green | east | 2015-01-02 | 192.168.0.1
COPY 1
4 | sand | brown | west | 2016-03-04 | 192.168.0.1
psql
ERROR: expected 6 values, got 1
hooray
(0 rows)
COPY 1000
1000
Testing large row
CREATE TABLE
COPY 1
1
(1 row)
10000
Testing copy error
ERROR: relation "missing" does not exist
CREATE TABLE AS
dockercluster.go:683: unexpected extra event &{0 die} (after [])
--- FAIL: TestDockerPSQL/runMode=docker (6.41s)
```
<details><summary>More</summary><p>
Parameters:
- GOFLAGS=-json
```
make stressrace TESTS=TestDockerPSQL PKG=./pkg/acceptance TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1
```
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2ATestDockerPSQL.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
non_defect
|
acceptance testdockerpsql failed on run testdockerpsql test log scope go test logs captured to home agent work go src github com cockroachdb cockroach artifacts acceptance test log scope go use show logs to present logs inline cont testdockerpsql adapter test go test log scope end test logs left over in home agent work go src github com cockroachdb cockroach artifacts acceptance fail testdockerpsql run testdockerpsql runmode docker flag logtostderr has been deprecated use log instead to specify sinks stderr filter cluster successfully initialized row create database create table create table copy slide blue south swing yellow northwest ffff ffff copy rope green east slide blue south rope green east copy sand brown west psql error expected values got hooray rows copy testing large row create table copy row testing copy error error relation missing does not exist create table as dockercluster go unexpected extra event die after fail testdockerpsql runmode docker more parameters goflags json make stressrace tests testdockerpsql pkg pkg acceptance testtimeout stressflags timeout powered by
| 0
|
293,861
| 25,328,702,387
|
IssuesEvent
|
2022-11-18 11:26:05
|
wpeventmanager/wp-event-manager-migration
|
https://api.github.com/repos/wpeventmanager/wp-event-manager-migration
|
closed
|
Expired event is not import
|
In Testing
|
Expired event is not import.
Before import event list.

**After import event list.**

|
1.0
|
Expired event is not import - Expired event is not import.
Before import event list.

**After import event list.**

|
non_defect
|
expired event is not import expired event is not import before import event list after import event list
| 0
|
2,817
| 3,882,729,864
|
IssuesEvent
|
2016-04-13 11:08:16
|
versionpress/versionpress
|
https://api.github.com/repos/versionpress/versionpress
|
closed
|
Build frontend on `gulp build`
|
improvement in review minor scope:dev-infrastructure size:xs
|
Built ZIP should contain up-to-date version of the frontend. Now it just copies already built version if there is any.
|
1.0
|
Build frontend on `gulp build` - Built ZIP should contain up-to-date version of the frontend. Now it just copies already built version if there is any.
|
non_defect
|
build frontend on gulp build built zip should contain up to date version of the frontend now it just copies already built version if there is any
| 0
|
10,033
| 2,618,932,325
|
IssuesEvent
|
2015-03-03 00:00:46
|
chrsmith/open-ig
|
https://api.github.com/repos/chrsmith/open-ig
|
closed
|
Errors in translations
|
auto-migrated Chat Labels Priority-Medium Type-Defect
|
```
Game version: open-ig-0.95.124
Operating System: Windows 7 64-bit
Java runtime version: 1.7.0.9
Installed using the Launcher? yes
Game language (en, hu, de): ru
What steps will reproduce the problem?
1. a quest to prevent ships to land on New Caroline, wich where infected by a
virus
2. attack a trade ship trying to land
3. my phrases variants and the ship's answers seem to be mixed up.
See the attached file.
For example, 2.png
the second variant translates like this:
"I repeat, I've got a new antidote onboard. You must let me land, in other case
thousnds of people will be in great danger!" - this must be a ship's message,
not mine.
```
Original issue reported on code.google.com by `chehrano...@gmail.com` on 10 Jan 2013 at 7:11
Attachments:
* [1.png](https://storage.googleapis.com/google-code-attachments/open-ig/issue-711/comment-0/1.png)
* [2.png](https://storage.googleapis.com/google-code-attachments/open-ig/issue-711/comment-0/2.png)
|
1.0
|
Errors in translations - ```
Game version: open-ig-0.95.124
Operating System: Windows 7 64-bit
Java runtime version: 1.7.0.9
Installed using the Launcher? yes
Game language (en, hu, de): ru
What steps will reproduce the problem?
1. a quest to prevent ships to land on New Caroline, wich where infected by a
virus
2. attack a trade ship trying to land
3. my phrases variants and the ship's answers seem to be mixed up.
See the attached file.
For example, 2.png
the second variant translates like this:
"I repeat, I've got a new antidote onboard. You must let me land, in other case
thousnds of people will be in great danger!" - this must be a ship's message,
not mine.
```
Original issue reported on code.google.com by `chehrano...@gmail.com` on 10 Jan 2013 at 7:11
Attachments:
* [1.png](https://storage.googleapis.com/google-code-attachments/open-ig/issue-711/comment-0/1.png)
* [2.png](https://storage.googleapis.com/google-code-attachments/open-ig/issue-711/comment-0/2.png)
|
defect
|
errors in translations game version open ig operating system windows bit java runtime version installed using the launcher yes game language en hu de ru what steps will reproduce the problem a quest to prevent ships to land on new caroline wich where infected by a virus attack a trade ship trying to land my phrases variants and the ship s answers seem to be mixed up see the attached file for example png the second variant translates like this i repeat i ve got a new antidote onboard you must let me land in other case thousnds of people will be in great danger this must be a ship s message not mine original issue reported on code google com by chehrano gmail com on jan at attachments
| 1
|
17,518
| 3,011,118,810
|
IssuesEvent
|
2015-07-28 16:18:28
|
KasaiDot/codejam-commandline
|
https://api.github.com/repos/KasaiDot/codejam-commandline
|
closed
|
VERSION "constant" incorrect
|
auto-migrated Priority-Medium Type-Defect
|
```
VERSION in "lib/constants.py" should be updated :)
```
Original issue reported on code.google.com by `sumu...@gmail.com` on 9 Jun 2012 at 8:50
|
1.0
|
VERSION "constant" incorrect - ```
VERSION in "lib/constants.py" should be updated :)
```
Original issue reported on code.google.com by `sumu...@gmail.com` on 9 Jun 2012 at 8:50
|
defect
|
version constant incorrect version in lib constants py should be updated original issue reported on code google com by sumu gmail com on jun at
| 1
|
64,316
| 18,426,366,684
|
IssuesEvent
|
2021-10-13 22:49:48
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
FilePanel no longer paginates.
|
T-Defect
|
### Steps to reproduce
Open the FilePanel in a room with lots of images. See that it only shows one page's worth of thumbnails, and if you scroll up it doesn't backpaginate at all any more.
### Outcome
The ability to backpaginate on the FilePanel.
### Operating system
macOS
### Application version
Nightly
### How did you install the app?
_No response_
### Homeserver
_No response_
### Will you send logs?
No
|
1.0
|
FilePanel no longer paginates. - ### Steps to reproduce
Open the FilePanel in a room with lots of images. See that it only shows one page's worth of thumbnails, and if you scroll up it doesn't backpaginate at all any more.
### Outcome
The ability to backpaginate on the FilePanel.
### Operating system
macOS
### Application version
Nightly
### How did you install the app?
_No response_
### Homeserver
_No response_
### Will you send logs?
No
|
defect
|
filepanel no longer paginates steps to reproduce open the filepanel in a room with lots of images see that it only shows one page s worth of thumbnails and if you scroll up it doesn t backpaginate at all any more outcome the ability to backpaginate on the filepanel operating system macos application version nightly how did you install the app no response homeserver no response will you send logs no
| 1
|
34,863
| 16,737,520,699
|
IssuesEvent
|
2021-06-11 05:06:20
|
tensorflow/tensorflow
|
https://api.github.com/repos/tensorflow/tensorflow
|
opened
|
Backpropagation operators get blocked by NCCL operators when TF Profiler is enabled
|
type:performance
|
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes.
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Debian GNU/Linux 10 (buster).
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.4.1
- Python version: 3.7.3
- Nvidia Driver: 418.116.00
- CUDA version: 11.0
- cuDNN: 8.0.5
- GPU model and memory: Tesla V100-SXM2-32GB
- NCCL version: 2.8.3
- Horovod version: 0.21.0
- Connection: 2 physical machines, each with 2 V100 GPUs
- Network: 100 Gbps RDMA
**Describe the current behavior**
When using TensorFlow's profiler `tf.profiler.experimental` API to see the timeline of a distributed training job with Horovod+NCCL, it seems that backpropagation (**BP** for short) operators are blocked by NCCL operators. The following figure is the timeline of the rank 0 GPU: the `Stream #30` row shows NCCL traces and the `TensorFlow Ops` row shows the computation operators. Here we observe two cases of blocking:
1. When the `NCCL 1` operator starts to run, one **BP** operator is still running. The execution time of this **BP** operator is always larger than that of `NCCL 1`, suggesting that this **BP** operator is blocked and can only finish after `NCCL 1` finishes.
2. Another case is that during the execution of `NCCL 2`, no `BP` operator is scheduled to run.

**Describe the expected behavior**
However, these two kinds of blocking will not occur when TensorFlow Profiler is disabled. The following figure shows the corresponding timeline (rank 0 GPU) profiled with `nvprof`. We can see that there is no obvious gap between **BP** operators or extremely long **BP** operators.

Actually, we also did not observe these two kinds of blocking in TensorFlow 1.x; **it seems to be a problem with the profiler of TensorFlow 2.x**
**Standalone code to reproduce the issue**
The python Script to reproduce:
https://github.com/joapolarbear/horovod/blob/b_v0.21.0_tfissue/examples/tensorflow2/tensorflow2_synthetic_benchmark.py
### Profile with TensorFlow Profiler
```
export TRACE_DIR=/path/to/dir
mpirun -np ${TOTAL_GPU_NUM} -H ${HOST_LIST} \
-bind-to none -map-by slot -mca plm_rsh_args '-p 12345' \
-mca pml ob1 -mca btl ^openib --allow-run-as-root \
python3 tensorflow2_synthetic_benchmark.py --profile_range 10,20 --trace_dir ${TRACE_DIR}
```
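The `--profile_range 10,20` flag in the reproduction script bounds which training steps are profiled. Its behavior can be sketched as follows (an assumption about the flag's semantics, namely that profiling starts at the first step and stops at the second, with plain callables standing in for `tf.profiler.experimental.start`/`stop` so the sketch runs without TensorFlow installed):

```python
def run_steps(num_steps, profile_range, start_profiler, stop_profiler):
    """Run num_steps dummy training steps, profiling only the steps
    in [profile_range[0], profile_range[1])."""
    lo, hi = profile_range
    profiled = []
    for step in range(num_steps):
        if step == lo:
            start_profiler()
        # ... one training step would execute here ...
        if lo <= step < hi:
            profiled.append(step)
        if step == hi:
            stop_profiler()
    return profiled


events = []
profiled = run_steps(
    25, (10, 20),
    start_profiler=lambda: events.append("start"),
    stop_profiler=lambda: events.append("stop"),
)
print(events)                      # ['start', 'stop']
print(profiled[0], profiled[-1])   # 10 19
```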
### Profile with nvprof
```
export TRACE_DIR=/path/to/dir
export NVPROF_CMD="nvprof -o $TRACE_DIR/simple.%q{OMPI_COMM_WORLD_RANK}.nvprof "
mpirun -np ${TOTAL_GPU_NUM} -H ${HOST_LIST} \
-bind-to none -map-by slot -mca plm_rsh_args '-p 12345' \
-mca pml ob1 -mca btl ^openib --allow-run-as-root \
${NVPROF_CMD} python3 tensorflow2_synthetic_benchmark.py
```
**Other info / logs** Include any logs or source code that would be helpful to
The trace files we used in above two examples:
[tf_profiler](https://github.com/tensorflow/tensorflow/files/6635952/trace.json.gz)
[nvprof](https://github.com/tensorflow/tensorflow/files/6635957/rank0.json.zip)
|
True
|
Backpropagation operators get blocked by NCCL operators when TF Profiler is enabled -
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes.
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Debian GNU/Linux 10 (buster).
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.4.1
- Python version: 3.7.3
- Nvidia Driver: 418.116.00
- CUDA version: 11.0
- cuDNN: 8.0.5
- GPU model and memory: Tesla V100-SXM2-32GB
- NCCL version: 2.8.3
- Horovod version: 0.21.0
- Connection: 2 physical machines, each with 2 V100 GPUs
- Network: 100 Gbps RDMA
**Describe the current behavior**
When using TensorFlow's profiler `tf.profiler.experimental` API to see the timeline of a distributed training job with Horovod+NCCL, it seems that backpropagation (**BP** in short) operators are blocked by NCCL operators. The following figure is the timeline of rank 0 GPU, the row of `Stream #30` shows NCCL traces and the row of `TensorFlow Ops` shows the computation operators . Here we observe two cases of blocking:
1. When the `NCCL 1` operator starts to run, one **BP** operator is still running. The execution time of this **BP** operator is always larger than that of `NCCL 1`, saying that this **BP** operator is blocked and can only finishes after `NCCL 1` is finished.
2. Another case is that during the execution of `NCCL 2`, no `BP` operator is scheduled to run.

**Describe the expected behavior**
However, these two kinds of blocking will not occur when TensorFlow Profiler is disabled. The following figure shows the corresponding timeline (rank 0 GPU) profiled with `nvprof`. We can see that there is no obvious gap between **BP** operators or extremely long **BP** operators.

Actually, we also didn't observe these two kinds of blocking in Tensorflow 1.x, **it seems that it is a problem of the profiler of TensorFlow 2.x**
**Standalone code to reproduce the issue**
The python Script to reproduce:
https://github.com/joapolarbear/horovod/blob/b_v0.21.0_tfissue/examples/tensorflow2/tensorflow2_synthetic_benchmark.py
### Profile with TensorFlow Profiler
```
export TRACE_DIR=/path/to/dir
mpirun -np ${TOTAL_GPU_NUM} -H ${HOST_LIST} \
-bind-to none -map-by slot -mca plm_rsh_args '-p 12345' \
-mca pml ob1 -mca btl ^openib --allow-run-as-root \
python3 tensorflow2_synthetic_benchmark.py --profile_range 10,20 --trace_dir ${TRACE_DIR}
```
### Profile with nvprof
```
export TRACE_DIR=/path/to/dir
export NVPROF_CMD="nvprof -o $TRACE_DIR/simple.%q{OMPI_COMM_WORLD_RANK}.nvprof "
mpirun -np ${TOTAL_GPU_NUM} -H ${HOST_LIST} \
-bind-to none -map-by slot -mca plm_rsh_args '-p 12345' \
-mca pml ob1 -mca btl ^openib --allow-run-as-root \
${NVPROF_CMD} python3 tensorflow2_synthetic_benchmark.py
```
**Other info / logs** Include any logs or source code that would be helpful to
The trace files we used in above two examples:
[tf_profiler](https://github.com/tensorflow/tensorflow/files/6635952/trace.json.gz)
[nvprof](https://github.com/tensorflow/tensorflow/files/6635957/rank0.json.zip)
|
non_defect
|
backpropagation operators get blocked by nccl operators when tf profiler is enabled system information have i written custom code as opposed to using a stock example script provided in tensorflow yes os platform and distribution e g linux ubuntu debian gnu linux buster tensorflow installed from source or binary binary tensorflow version use command below python version nvidia driver cuda version cudnn gpu model and memory tesla nccl version horovod version connection physical machines each with gpus network gbps rdma describe the current behavior when using tensorflow s profiler tf profiler experimental api to see the timeline of a distributed training job with horovod nccl it seems that backpropagation bp in short operators are blocked by nccl operators the following figure is the timeline of rank gpu the row of stream shows nccl traces and the row of tensorflow ops shows the computation operators here we observe two cases of blocking when the nccl operator starts to run one bp operator is still running the execution time of this bp operator is always larger than that of nccl saying that this bp operator is blocked and can only finishes after nccl is finished another case is that during the execution of nccl no bp operator is scheduled to run describe the expected behavior however these two kinds of blocking will not occur when tensorflow profiler is disabled the following figure shows the corresponding timeline rank gpu profiled with nvprof we can see that there is no obvious gap between bp operators or extremely long bp operators actually we also didn t observe these two kinds of blocking in tensorflow x it seems that it is a problem of the profiler of tensorflow x standalone code to reproduce the issue the python script to reproduce profile with tensorflow profiler export trace dir path to dir mpirun np total gpu num h host list bind to none map by slot mca plm rsh args p mca pml mca btl openib allow run as root synthetic benchmark py profile range trace dir 
trace dir profile with nvprof export trace dir path to dir export nvprof cmd nvprof o trace dir simple q ompi comm world rank nvprof mpirun np total gpu num h host list bind to none map by slot mca plm rsh args p mca pml mca btl openib allow run as root nvprof cmd synthetic benchmark py other info logs include any logs or source code that would be helpful to the trace files we used in above two examples
| 0
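Each record above carries the raw issue title/body alongside a lowercased, digit-free `text` column. The exact pipeline that produced that column is not documented in this dump, but comparing raw and normalized rows suggests roughly: lowercase, turn punctuation into spaces, and drop any token containing a digit. A minimal sketch of that inferred transformation, under those assumptions:

```python
import re

def normalize(text: str) -> str:
    # Lowercase, replace punctuation runs with spaces, then drop any token
    # that contains a digit. This is a best-effort reconstruction of the
    # mapping between the raw issue text and the `text` column; the real
    # pipeline used to build the dataset may differ.
    tokens = re.sub(r"[^a-z0-9]+", " ", text.lower()).split()
    return " ".join(t for t in tokens if not any(c.isdigit() for c in t))

# e.g. the title of the Godot record further below:
print(normalize("compiling android export template on Windows 7 32bit error"))
# → compiling android export template on windows error
```

Note that this reproduces the dropped version strings ("Windows 7 32bit" becomes just "windows"), which matches the normalized rows in this dump.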
|
4,054
| 3,292,215,906
|
IssuesEvent
|
2015-10-30 13:41:32
|
godotengine/godot
|
https://api.github.com/repos/godotengine/godot
|
closed
|
compiling android export template on Windows 7 32bit error
|
archived bug platform:android platform:windows topic:buildsystem
|
the error is os_android.os. please help regarding this. im not able to release game on playstore without admob. so im planning to create android export template.
|
1.0
|
compiling android export template on Windows 7 32bit error - the error is os_android.os. please help regarding this. im not able to release game on playstore without admob. so im planning to create android export template.
|
non_defect
|
compiling android export template on windows error the error is os android os please help regarding this im not able to release game on playstore without admob so im planning to create android export template
| 0
|
461,110
| 13,223,832,864
|
IssuesEvent
|
2020-08-17 17:58:18
|
grpc/grpc
|
https://api.github.com/repos/grpc/grpc
|
closed
|
[Aio] grpcio 1.3.0 Simplest test fails on Windows with __dealloc__ called on running server
|
kind/bug lang/Python priority/P2
|
### What version of gRPC and what language are you using?
grpcio=1.30.0 (Python)
### What operating system (Linux, Windows,...) and version?
Windows 10
### What runtime / compiler are you using (e.g. python version or version of gcc)
python 3.8.1
### What did I do
I was trying to use python grpc on the server with asyncio implementation. Even the simplest examples from unit tests fail just after I create the server instance object.
There is a warning in terminal:
`WARNING:grpc._cython.cygrpc:__dealloc__ called on running server <grpc._cython.cygrpc.AioServer object at 0x0000022D3227E440> with status 0`
The code is dead simple:
```
async def serve():
# server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
server = aio.server();
server.add_insecure_port('[::]:9090')
log.info("Starting...")
await server.start()
log.info("Started")
await server.wait_for_termination()
log.info("Terminated.")
if __name__ == '__main__':
logging.basicConfig()
log = logging.getLogger(__name__)
log.setLevel(logging.DEBUG)
log.info("Serving from gRPC server")
loop = asyncio.get_event_loop()
loop.create_task(serve())
loop.run_forever()
```
It never gets to a point of starting the server, the app is done after the warning appears in terminal.
What am I missing? The non-asyncio version works just fine.
|
1.0
|
[Aio] grpcio 1.3.0 Simplest test fails on Windows with __dealloc__ called on running server - ### What version of gRPC and what language are you using?
grpcio=1.30.0 (Python)
### What operating system (Linux, Windows,...) and version?
Windows 10
### What runtime / compiler are you using (e.g. python version or version of gcc)
python 3.8.1
### What did I do
I was trying to use python grpc on the server with asyncio implementation. Even the simplest examples from unit tests fail just after I create the server instance object.
There is a warning in terminal:
`WARNING:grpc._cython.cygrpc:__dealloc__ called on running server <grpc._cython.cygrpc.AioServer object at 0x0000022D3227E440> with status 0`
The code is dead simple:
```
async def serve():
# server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
server = aio.server();
server.add_insecure_port('[::]:9090')
log.info("Starting...")
await server.start()
log.info("Started")
await server.wait_for_termination()
log.info("Terminated.")
if __name__ == '__main__':
logging.basicConfig()
log = logging.getLogger(__name__)
log.setLevel(logging.DEBUG)
log.info("Serving from gRPC server")
loop = asyncio.get_event_loop()
loop.create_task(serve())
loop.run_forever()
```
It never gets to a point of starting the server, the app is done after the warning appears in terminal.
What am I missing? The non-asyncio version works just fine.
|
non_defect
|
grpcio simplest test fails on windows with dealloc called on running server what version of grpc and what language are you using grpcio python what operating system linux windows and version windows what runtime compiler are you using e g python version or version of gcc python what did i do i was trying to use python grpc on the server with asyncio implementation even the simplest examples from unit tests fail just after i create the server instance object there is a warning in terminal warning grpc cython cygrpc dealloc called on running server with status the code is dead simple async def serve server grpc server futures threadpoolexecutor max workers server aio server server add insecure port log info starting await server start log info started await server wait for termination log info terminated if name main logging basicconfig log logging getlogger name log setlevel logging debug log info serving from grpc server loop asyncio get event loop loop create task serve loop run forever it never gets to a point of starting the server the app is done after the warning appears in terminal what am i missing the non asyncio version works just fine
| 0
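The gRPC record above schedules its server with `loop.create_task(serve())` without keeping a reference to the task. Whether or not that was this bug's root cause (the report predates later grpcio fixes), fire-and-forget tasks are a known asyncio pitfall: a task with no strong reference can be garbage-collected before it finishes. A stdlib-only sketch of the safer pattern, with a hypothetical `serve` standing in for the gRPC coroutine (no grpcio dependency):

```python
import asyncio

async def serve(started: asyncio.Future) -> str:
    # Stand-in for the gRPC server coroutine in the report above
    # (hypothetical; starts, signals readiness, then "terminates").
    started.set_result(True)
    await asyncio.sleep(0)        # stand-in for wait_for_termination()
    return "terminated"

async def main() -> str:
    started = asyncio.get_running_loop().create_future()
    task = asyncio.create_task(serve(started))  # keep a strong reference
    await started                 # server is up
    return await task             # await the task instead of run_forever()

print(asyncio.run(main()))
# → terminated
```

Holding `task` (and awaiting it) guarantees the coroutine is neither collected nor silently dropped when the loop spins.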
|
78,293
| 27,416,362,084
|
IssuesEvent
|
2023-03-01 14:02:45
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
opened
|
selectOneRadio: custom layout (referenced) _clone name
|
:lady_beetle: defect :bangbang: needs-triage
|
### Describe the bug
I'm using selectOneRadio with two radioButtons with custom layout (referenced). This is working in PF 10 and 11, but not in PF 12. When clicking the radioButtons the events are not sent. I see in the Chrome devtools that the id/name for the selectOneRadio is appearing twice. One with just the name specified and one with "_clone" added to it, e.g. radioOptions and radioOptions_clone. When I remove the "_clone" in devtools its working fine. In PF 11 the name does not have the "_clone" added.
If I set plain="false" in selectOneRadio it seems to work. I guess plain="false" is default. But the id for the selectOneRadio is still appearing twice, but now both get updated. One with just the id/name specified and one with "_clone"
But setting plain="true" the value for the selectOneRadio is not updated. Just the value for the "_clone" is updated.
[primefaces-test.zip](https://github.com/primefaces/primefaces/files/10861261/primefaces-test.zip)
### Reproducer
Go to devtools and see the name is ="frmTest:customRadio_clone"
### Expected behavior
selectOneRadio is working as in PF 11, "_clone" is not added to the selectOneRadio name.
### PrimeFaces edition
Community
### PrimeFaces version
12.0.0
### Theme
_No response_
### JSF implementation
Mojarra
### JSF version
2.3
### Java version
11
### Browser(s)
_No response_
|
1.0
|
selectOneRadio: custom layout (referenced) _clone name - ### Describe the bug
I'm using selectOneRadio with two radioButtons with custom layout (referenced). This is working in PF 10 and 11, but not in PF 12. When clicking the radioButtons the events are not sent. I see in the Chrome devtools that the id/name for the selectOneRadio is appearing twice. One with just the name specified and one with "_clone" added to it, e.g. radioOptions and radioOptions_clone. When I remove the "_clone" in devtools its working fine. In PF 11 the name does not have the "_clone" added.
If I set plain="false" in selectOneRadio it seems to work. I guess plain="false" is default. But the id for the selectOneRadio is still appearing twice, but now both get updated. One with just the id/name specified and one with "_clone"
But setting plain="true" the value for the selectOneRadio is not updated. Just the value for the "_clone" is updated.
[primefaces-test.zip](https://github.com/primefaces/primefaces/files/10861261/primefaces-test.zip)
### Reproducer
Go to devtools and see the name is ="frmTest:customRadio_clone"
### Expected behavior
selectOneRadio is working as in PF 11, "_clone" is not added to the selectOneRadio name.
### PrimeFaces edition
Community
### PrimeFaces version
12.0.0
### Theme
_No response_
### JSF implementation
Mojarra
### JSF version
2.3
### Java version
11
### Browser(s)
_No response_
|
defect
|
selectoneradio custom layout referenced clone name describe the bug i m using selectoneradio with two radiobuttons with custom layout referenced this is working in pf and but not in pf when clicking the radiobuttons the events are not sent i see in the chrome devtools that the id name for the selectoneradio is appearing twice one with just the name specified and one with clone added to it e g radiooptions and radiooptions clone when i remove the clone in devtools its working fine in pf the name does not have the clone added if i set plain false in selectoneradio it seems to work i guess plain false is default but the id for the selectoneradio is still appearing twice but now both get updated one with just the id name specified and one with clone but setting plain true the value for the selectoneradio is not updated just the value for the clone is updated reproducer go to devtools and see the name is frmtest customradio clone expected behavior selectoneradio is working as in pf clone is not added to the selectoneradio name primefaces edition community primefaces version theme no response jsf implementation mojarra jsf version java version browser s no response
| 1
|
191,123
| 14,593,292,079
|
IssuesEvent
|
2020-12-19 21:59:24
|
KodstarBootcamp/issue-tracker-2020-3
|
https://api.github.com/repos/KodstarBootcamp/issue-tracker-2020-3
|
opened
|
Simple issue list
|
test
|
From issue-tracker-2020-1 created by [orhanugurlu](https://github.com/orhanugurlu): KodstarBootcamp/issue-tracker-2020-1#36
From issue-tracker-2020-1 created by [orhanugurlu](https://github.com/orhanugurlu): KodstarBootcamp/issue-tracker-2020-1#35
From issue-tracker-2020-1 created by [sodemir](https://github.com/sodemir): KodstarBootcamp/issue-tracker-2020-1#32
quick and dirty screen to list all issues. dont bother with layout, ui, ux.
just write all issues, one under the other. let the user navigate to issue edit view for now
|
1.0
|
Simple issue list - From issue-tracker-2020-1 created by [orhanugurlu](https://github.com/orhanugurlu): KodstarBootcamp/issue-tracker-2020-1#36
From issue-tracker-2020-1 created by [orhanugurlu](https://github.com/orhanugurlu): KodstarBootcamp/issue-tracker-2020-1#35
From issue-tracker-2020-1 created by [sodemir](https://github.com/sodemir): KodstarBootcamp/issue-tracker-2020-1#32
quick and dirty screen to list all issues. dont bother with layout, ui, ux.
just write all issues, one under the other. let the user navigate to issue edit view for now
|
non_defect
|
simple issue list from issue tracker created by kodstarbootcamp issue tracker from issue tracker created by kodstarbootcamp issue tracker from issue tracker created by kodstarbootcamp issue tracker quick and dirty screen to list all issues dont bother with layout ui ux just write all issues one under the other let the user navigate to issue edit view for now
| 0
|
72,497
| 24,143,584,937
|
IssuesEvent
|
2022-09-21 16:41:21
|
primefaces/primeng
|
https://api.github.com/repos/primefaces/primeng
|
closed
|
p-calendar closes after selecting a date. It should still be open when you have to also select time.
|
defect
|
### Describe the bug
It appears as solved on
https://github.com/primefaces/primeng/issues/3584#event-1301230693
But I am still having the issue, even on the demo on your site https://www.primefaces.org/primeng/calendar
### Environment
v14
### Reproducer
_No response_
### Angular version
12
### PrimeNG version
14
### Build / Runtime
TypeScript
### Language
ALL
### Node version (for AoT issues node --version)
7+
### Browser(s)
_No response_
### Steps to reproduce the behavior
create a p-calendar with time
choose a date + time,
you will have to open the calendar again to choose time.
### Expected behavior
Calendar should keep opened if you need to select time
|
1.0
|
p-calendar closes after selecting a date. It should still be open when you have to also select time. - ### Describe the bug
It appears as solved on
https://github.com/primefaces/primeng/issues/3584#event-1301230693
But I am still having the issue, even on the demo on your site https://www.primefaces.org/primeng/calendar
### Environment
v14
### Reproducer
_No response_
### Angular version
12
### PrimeNG version
14
### Build / Runtime
TypeScript
### Language
ALL
### Node version (for AoT issues node --version)
7+
### Browser(s)
_No response_
### Steps to reproduce the behavior
create a p-calendar with time
choose a date + time,
you will have to open the calendar again to choose time.
### Expected behavior
Calendar should keep opened if you need to select time
|
defect
|
p calendar closes after selecting a date it should still be open when you have to also select time describe the bug it appears as solved on but i am still having the issue even on the demo on your site environment reproducer no response angular version primeng version build runtime typescript language all node version for aot issues node version browser s no response steps to reproduce the behavior create a p calendar with time choose a date time you will have to open the calendar again to choose time expected behavior calendar should keep opened if you need to select time
| 1
|
628,503
| 19,987,405,551
|
IssuesEvent
|
2022-01-30 21:34:03
|
processing/processing4
|
https://api.github.com/repos/processing/processing4
|
closed
|
'ArrayIndexOutOfBoundsException: Coordinate out of bounds!' when resizing sketch and saving frame
|
lower priority
|
When resizing the sketch really small and saving frames an ArrayIndexOutOfBoundsException is triggered.
```java
void setup() {
surface.setResizable(true);
}
void draw() {
saveFrame();
}
```
alpha 3
OSX 10.15.7
|
1.0
|
'ArrayIndexOutOfBoundsException: Coordinate out of bounds!' when resizing sketch and saving frame - When resizing the sketch really small and saving frames an ArrayIndexOutOfBoundsException is triggered.
```java
void setup() {
surface.setResizable(true);
}
void draw() {
saveFrame();
}
```
alpha 3
OSX 10.15.7
|
non_defect
|
arrayindexoutofboundsexception coordinate out of bounds when resizing sketch and saving frame when resizing the sketch really small and saving frames an arrayindexoutofboundsexception is triggered java void setup surface setresizable true void draw saveframe alpha osx
| 0
|
614,490
| 19,184,183,792
|
IssuesEvent
|
2021-12-04 23:09:55
|
brave/brave-browser
|
https://api.github.com/repos/brave/brave-browser
|
closed
|
Filter out ERC721 tokens for MVP
|
priority/P3 QA/No release-notes/exclude feature/wallet OS/Android
|
We are going to add support of ERC721 tokens after MVP. Please filter them out right now.
|
1.0
|
Filter out ERC721 tokens for MVP - We are going to add support of ERC721 tokens after MVP. Please filter them out right now.
|
non_defect
|
filter out tokens for mvp we are going to add support of tokens after mvp please filter them out right now
| 0
|
77,153
| 26,807,394,598
|
IssuesEvent
|
2023-02-01 19:23:24
|
dkfans/keeperfx
|
https://api.github.com/repos/dkfans/keeperfx
|
closed
|
Pick op gold hoard shortcut desyncs multiplayer
|
Priority-Medium Type-Defect Component-Network Component-Input
|
Hold Ctrl when picking up a gold hoard in your treasure room during multiplayer, and this will briefly resync your game.
|
1.0
|
Pick op gold hoard shortcut desyncs multiplayer - Hold Ctrl when picking up a gold hoard in your treasure room during multiplayer, and this will briefly resync your game.
|
defect
|
pick op gold hoard shortcut desyncs multiplayer hold ctrl when picking up a gold hoard in your treasure room during multiplayer and this will briefly resync your game
| 1
|
699,053
| 24,002,373,064
|
IssuesEvent
|
2022-09-14 12:30:10
|
zowe/api-layer
|
https://api.github.com/repos/zowe/api-layer
|
closed
|
Autofix styling errors in build
|
enhancement good first issue Priority: Medium technical excellence
|
**Is your feature request related to a problem? Please describe.**
As a developer, I am annoyed and slowed down when a build fails due to formatting issue like styling or a missing license header.
**Describe the solution you'd like**
Gradle task to be added that will auto-format, auto-add missing licenses, etc. to resolve as many issues as possible that `checkstyleTest` can throw.
**Describe alternatives you've considered**
Keep pushing extra commits that add one space
|
1.0
|
Autofix styling errors in build - **Is your feature request related to a problem? Please describe.**
As a developer, I am annoyed and slowed down when a build fails due to formatting issue like styling or a missing license header.
**Describe the solution you'd like**
Gradle task to be added that will auto-format, auto-add missing licenses, etc. to resolve as many issues as possible that `checkstyleTest` can throw.
**Describe alternatives you've considered**
Keep pushing extra commits that add one space
|
non_defect
|
autofix styling errors in build is your feature request related to a problem please describe as a developer i am annoyed and slowed down when a build fails due to formatting issue like styling or a missing license header describe the solution you d like gradle task to be added that will auto format auto add missing licenses etc to resolve as many issues as possible that checkstyletest can throw describe alternatives you ve considered keep pushing extra commits that add one space
| 0
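Across the records in this dump, the textual label column ("defect" / "non_defect") lines up with a `binary_label` of 1 / 0. A one-line sketch of that apparent mapping — inferred from the rows above, not from any dataset documentation:

```python
def to_binary_label(label: str) -> int:
    # binary_label appears to be 1 for "defect" and 0 for "non_defect";
    # this mapping is inferred from the rows in this dump.
    return 1 if label == "defect" else 0

print([to_binary_label(x) for x in ("defect", "non_defect")])
# → [1, 0]
```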
|