Unnamed: 0 int64 1 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 3 438 | labels stringlengths 4 308 | body stringlengths 7 254k | index stringclasses 7 values | text_combine stringlengths 96 254k | label stringclasses 2 values | text stringlengths 96 246k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
69,584 | 22,550,278,395 | IssuesEvent | 2022-06-27 04:19:42 | beefproject/beef | https://api.github.com/repos/beefproject/beef | reopened | Can't hook from within Firefox chrome zone | Defect Core Priority Low medium effort | A BeEF hook injected into a Firefox extension will not hook. Fix this.
Blocked on #875
| 1.0 | Can't hook from within Firefox chrome zone - A BeEF hook injected into a Firefox extension will not hook. Fix this.
Blocked on #875
| non_main | can t hook from within firefox chrome zone a beef hook injected into a firefox extension will not hook fix this blocked on | 0 |
946 | 4,677,105,305 | IssuesEvent | 2016-10-07 14:10:10 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | firewalld fails, if firewall is not started | affects_2.1 feature_idea waiting_on_maintainer | It would be desired that the ansible firewalld module can add a rule even if the firewall is down/not started at the moment.
Here is my use case:
```
-bash-4.2# systemctl stop firewalld
-bash-4.2# ansible-playbook ~/provision/test.yml -i ~/provision/hosts --connection=local
PLAY [local] ******************************************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [test | Firewall settings] **********************************************
failed: [localhost] => {"failed": true, "parsed": false}
failed=True msg='failed to connect to the firewalld daemon'
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/root/test.retry
localhost : ok=1 changed=0 unreachable=0 failed=1
```
The task looks like this:
```
- name: Firewall settings
firewalld: zone=public port=5000/tcp permanent=true state=enabled
``` | True | firewalld fails, if firewall is not started - It would be desired that the ansible firewalld module can add a rule even if the firewall is down/not started at the moment.
Here is my use case:
```
-bash-4.2# systemctl stop firewalld
-bash-4.2# ansible-playbook ~/provision/test.yml -i ~/provision/hosts --connection=local
PLAY [local] ******************************************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [test | Firewall settings] **********************************************
failed: [localhost] => {"failed": true, "parsed": false}
failed=True msg='failed to connect to the firewalld daemon'
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/root/test.retry
localhost : ok=1 changed=0 unreachable=0 failed=1
```
The task looks like this:
```
- name: Firewall settings
firewalld: zone=public port=5000/tcp permanent=true state=enabled
``` | main | firewalld fails if firewall is not started it would be desired that the ansible firewalld module can add a rule even if the firewall is down not started at the moment here is my use case bash systemctl stop firewalld bash ansible playbook provision test yml i provision hosts connection local play gathering facts ok task failed failed true parsed false failed true msg failed to connect to the firewalld daemon fatal all hosts have already failed aborting play recap to retry use limit root test retry localhost ok changed unreachable failed the task looks like this name firewall settings firewalld zone public port tcp permanent true state enabled | 1 |
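The firewalld row above asks for a rule to be addable even while the daemon is stopped. One plausible fallback (an assumption about a reasonable fix, not what the Ansible module actually does) is to edit the permanent configuration with `firewall-offline-cmd` when the daemon is unreachable. A minimal, hedged sketch of that decision:

```go
package main

import "fmt"

// ruleCommand picks how to apply a permanent firewalld rule depending on
// whether the daemon is running. firewall-offline-cmd edits the permanent
// configuration directly, so it works while firewalld is stopped.
// This is an illustrative sketch, not the module's real implementation.
func ruleCommand(daemonRunning bool, zone, port string) []string {
	if daemonRunning {
		return []string{"firewall-cmd", "--permanent", "--zone=" + zone, "--add-port=" + port}
	}
	return []string{"firewall-offline-cmd", "--zone=" + zone, "--add-port=" + port}
}

func main() {
	// The failing task from the playbook, with the daemon down.
	fmt.Println(ruleCommand(false, "public", "5000/tcp"))
}
```

Detecting the daemon state itself (e.g. via `systemctl is-active firewalld`) is left out of the sketch.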
112,538 | 24,290,842,896 | IssuesEvent | 2022-09-29 05:45:47 | heclak/community-a4e-c | https://api.github.com/repos/heclak/community-a4e-c | closed | S_EVENT_SHOT , event id == 1 does not trigger/work in multiplayer server | Feature Request Code/LUA Need Research Multiplayer good first issue | S_EVENT_SHOT , event id == 1 does not trigger/work in multiplayer server for A-4 only, but works in single player.
**To Reproduce**
Steps to reproduce the behavior:
Hookup S_EVENT_SHOT or event id == 1 script (MIST required) for event setup.
- Fire any weapon in multiplayer standalone server and event id == 1 will not trigger/work.
**Expected behavior**
S_EVENT_SHOT or event id == 1 is supposed to trigger and upon event whatever action you set will execute
**Software Information (please complete the following information):**
- DCS Version: [e.g. 2.7.17.29140 Open Beta Dedicated Server]
- A-4E (all versions ... tried multiple versions)
**Additional context**
I tried other events event id == 2, 14 and they all work in multiplayer.
| 1.0 | S_EVENT_SHOT , event id == 1 does not trigger/work in multiplayer server - S_EVENT_SHOT , event id == 1 does not trigger/work in multiplayer server for A-4 only, but works in single player.
**To Reproduce**
Steps to reproduce the behavior:
Hookup S_EVENT_SHOT or event id == 1 script (MIST required) for event setup.
- Fire any weapon in multiplayer standalone server and event id == 1 will not trigger/work.
**Expected behavior**
S_EVENT_SHOT or event id == 1 is supposed to trigger and upon event whatever action you set will execute
**Software Information (please complete the following information):**
- DCS Version: [e.g. 2.7.17.29140 Open Beta Dedicated Server]
- A-4E (all versions ... tried multiple versions)
**Additional context**
I tried other events event id == 2, 14 and they all work in multiplayer.
| non_main | s event shot event id does not trigger work in multiplayer server s event shot event id does not trigger work in multiplayer server for a only but works in single player to reproduce steps to reproduce the behavior hookup s event shot or event id script mist required for event setup fire any weapon in multiplayer standalone server and event id will no trigger work expected behavior s event shot or event id is supposed to trigger and upon event whatever action you set will execute software information please complete the following information dcs version a all versions tried multiple versions additional context i tried other events event id and they all work in multiplayer | 0 |
604,614 | 18,715,526,438 | IssuesEvent | 2021-11-03 03:43:47 | dotnet/machinelearning-modelbuilder | https://api.github.com/repos/dotnet/machinelearning-modelbuilder | closed | The JSON-RPC connection with the remote party was lost before the request could complete. | Priority:0 Ship Blocker | **Model Builder Version**: Latest main
**Visual Studio Version**: _In which Visual Studio version was the bug encountered?_
**Bug description**
_A clear and concise description of what the bug is._
**Steps to Reproduce**
1. _List the minimal steps required to reproduce the bug._
2. _Thanks for reporting!_
**Expected Experience**
_A description of what you expected to happen. If applicable, add screenshots or "Machine Learning" Output Logs to help explain what you expected._
**Actual Experience**
_A description of what actually happens. If applicable, add screenshots or "Machine Learning" Output Logs to help explain what actually happened._
**Additional Context**
This issue happens for Image classification, OD scenarios, and Recommendation scenarios
| 1.0 | The JSON-RPC connection with the remote party was lost before the request could complete. - **Model Builder Version**: Latest main
**Visual Studio Version**: _In which Visual Studio version was the bug encountered?_
**Bug description**
_A clear and concise description of what the bug is._
**Steps to Reproduce**
1. _List the minimal steps required to reproduce the bug._
2. _Thanks for reporting!_
**Expected Experience**
_A description of what you expected to happen. If applicable, add screenshots or "Machine Learning" Output Logs to help explain what you expected._
**Actual Experience**
_A description of what actually happens. If applicable, add screenshots or "Machine Learning" Output Logs to help explain what actually happened._
**Additional Context**
This issue happens for Image classification, OD scenarios, and Recommendation scenarios
| non_main | the json rpc connection with the remote party was lost before the request could complete model builder version latest main visual studion version in which visual studio version was the bug encountered bug description a clear and concise description of what the bug is steps to reproduce list the minimal steps required to reproduce the bug thanks for reporting expected experience a description of what you expected to happen if applicable add screenshots or machine learning output logs to help explain what you expected actual experience description of what actually happens if applicable add screenshots or machine learning output logs to help explain what actually happened additional context this issue happens for image classification od scenarios and recommendation scenarios | 0 |
74,613 | 20,253,316,728 | IssuesEvent | 2022-02-14 20:13:09 | bitcoin/bitcoin | https://api.github.com/repos/bitcoin/bitcoin | closed | ARMv8 sha2 support | Build system Android | I assume #13191 makes this less hard, although benefits may be small. Brief [chat on IRC](https://botbot.me/freenode/bitcoin-core-dev/2018-06-05/?msg=100812298&page=2):
Me
> While trying to get bitcoind to run on one of the many *-pi's out there, I wondered: has anyone ever tried to design a system on chip that's optimal for this?
@laanwj:
> provoostenator: you mean secp256k1 specific instructions? people have been thinking about it, could be done on a FPGA, but I don't think it's ever been done
Me:
> echeveria seems to believe sha256 is the bottleneck (see #bitcoin), but also that anything outside the CPU would be too slow I/O to be worth it.
echeveria:
> I looked at the Zynq combination FPGA / ARM devices a long time ago and came to the conclusion that the copy time even on the shared memory bus between the two chips would make it non viable. I'd enjoy being proved wrong though.
laanwj:
> provoostenator: well sha256 extension instructions exist for ARM (supported on newer SoCs), I intend to add support for them at some point. But I would be surprised if that is the biggest bottleneck in validation.
echeveria:
> yes, if there is high-bandwidth communication between two chips that tends to dominate. I was thinking of, say, RiscV extensions for secp256k1 validation so it's in-core.
> for ARM it's somewhat unlikely at this time
I have (at least) three devices to test this on, which all have 4 to 8 ARM Cortex-A53 cores, and 1- 4 GB RAM: an Android Xiaomi A1 ([ABCore](https://github.com/greenaddress/abcore) syncs the whole chain in less than a month), a NanoPi Neo Plus and a Khadas VIM2 Max.
Maybe this c++ code is useful: https://github.com/randombit/botan/issues/841 | 1.0 | ARMv8 sha2 support - I assume #13191 makes this less hard, although benefits may be small. Brief [chat on IRC](https://botbot.me/freenode/bitcoin-core-dev/2018-06-05/?msg=100812298&page=2):
Me
> While trying to get bitcoind to run on one of the many *-pi's out there, I wondered: has anyone ever tried to design a system on chip that's optimal for this?
@laanwj:
> provoostenator: you mean secp256k1 specific instructions? people have been thinking about it, could be done on a FPGA, but I don't think it's ever been done
Me:
> echeveria seems to believe sha256 is the bottleneck (see #bitcoin), but also that anything outside the CPU would be too slow I/O to be worth it.
echeveria:
> I looked at the Zynq combination FPGA / ARM devices a long time ago and came to the conclusion that the copy time even on the shared memory bus between the two chips would make it non viable. I'd enjoy being proved wrong though.
laanwj:
> provoostenator: well sha256 extension instructions exist for ARM (supported on newer SoCs), I intend to add support for them at some point. But I would be surprised if that is the biggest bottleneck in validation.
echeveria:
> yes, if there is high-bandwidth communication between two chips that tends to dominate. I was thinking of, say, RiscV extensions for secp256k1 validation so it's in-core.
> for ARM it's somewhat unlikely at this time
I have (at least) three devices to test this on, which all have 4 to 8 ARM Cortex-A53 cores, and 1- 4 GB RAM: an Android Xiaomi A1 ([ABCore](https://github.com/greenaddress/abcore) syncs the whole chain in less than a month), a NanoPi Neo Plus and a Khadas VIM2 Max.
Maybe this c++ code is useful: https://github.com/randombit/botan/issues/841 | non_main | support i assume make this less hard although benefits may be small brief me while trying to get bitcoind to run on one the many pi s out there i wondered has anyone ever tried to design a system on chip that s optimal for this laanwj provoostenator you mean specific instructions people have been thingking about it could be done on a fpga but i don t think it s ever been done me echeveria seems to believe is the bottleneck see bitcoin but also that anything outside the cpu would be too slow i o to be worh it echeveria i looked at the zynq combination fpga arm devices a long time ago and came to the conclusion that the copy time even on the shared memory bus between the two chips would make it non viable i d enjoy being proved wrong though laanwj provoostenator well extension instructions exist for arm supported on newer socs i intend to add support for them at some point but i would be surprised if that is the biggest bottleneck in validation echeveria yes if there is high bandwidth communication between two chpis that tends to dominate i was thinking of say riscv extensions for validation so it s in core for arm it s somewhat unlikely at this time i have at least three devices to test this on which all have to arm cortex cores and gb ram an android xiaomi syncs the whole chain in less than a month a nanopi neo plus and a khadas max maybe this c code is useful | 0 |
3,348 | 12,977,722,539 | IssuesEvent | 2020-07-21 21:12:49 | PowerShell/PowerShell | https://api.github.com/repos/PowerShell/PowerShell | closed | RunspaceInvoke is missing from latest release | Area-SDK Issue-Question Review - Maintainer | ILSpy on the latest release:
https://www.nuget.org/packages/System.Management.Automation/
shows RunspaceInvoke missing
References:
https://docs.microsoft.com/en-us/dotnet/api/system.management.automation.runspaceinvoke?view=powershellsdk-1.1.0
https://github.com/PowerShell/PowerShell/blob/master/src/System.Management.Automation/engine/hostifaces/RunspaceInvoke.cs | True | RunspaceInvoke is missing from latest release - ILSpy on the latest release:
https://www.nuget.org/packages/System.Management.Automation/
shows RunspaceInvoke missing
References:
https://docs.microsoft.com/en-us/dotnet/api/system.management.automation.runspaceinvoke?view=powershellsdk-1.1.0
https://github.com/PowerShell/PowerShell/blob/master/src/System.Management.Automation/engine/hostifaces/RunspaceInvoke.cs | main | runspaceinvoke is missing from latest release ilspy on the latest release shows runspaceinvoke missing references | 1 |
5,041 | 25,841,357,763 | IssuesEvent | 2022-12-13 00:47:43 | ElasticPerch/websocket | https://api.github.com/repos/ElasticPerch/websocket | opened | [bug] Manually passed `Cookie` header overrides `http.CookieJar` cookies | waiting on new maintainer feature request | From websocket created by [yauheni-chaburanau](https://github.com/yauheni-chaburanau): gorilla/websocket#597
**Description**
If you manually pass `Cookie` header in `DialContext(..., http.Header)`, cookies from `Dialer.Jar` will be overwritten.
**Steps to Reproduce**
```go
dialer := websocket.Dialer{
Jar: jar,
}
header := http.Header{}
header.Set("Cookie", "some_cookie_name=some_cookie_value")
... = dialer.DialContext(ctx, url, header)
```
**Possible reason**
From the first look I would say that this is happening because [the part of code which is responsible for setting up all the passed headers](https://github.com/gorilla/websocket/blob/master/client.go#L207) ignores [already applied `Cookie` header from `http.CookieJar`](https://github.com/gorilla/websocket/blob/master/client.go#L190). | True | [bug] Manually passed `Cookie` header overrides `http.CookieJar` cookies - From websocket created by [yauheni-chaburanau](https://github.com/yauheni-chaburanau): gorilla/websocket#597
**Description**
If you manually pass `Cookie` header in `DialContext(..., http.Header)`, cookies from `Dialer.Jar` will be overwritten.
**Steps to Reproduce**
```go
dialer := websocket.Dialer{
Jar: jar,
}
header := http.Header{}
header.Set("Cookie", "some_cookie_name=some_cookie_value")
... = dialer.DialContext(ctx, url, header)
```
**Possible reason**
From the first look I would say that this is happening because [the part of code which is responsible for setting up all the passed headers](https://github.com/gorilla/websocket/blob/master/client.go#L207) ignores [already applied `Cookie` header from `http.CookieJar`](https://github.com/gorilla/websocket/blob/master/client.go#L190). | main | manually passed cookie header overrides http cookiejar cookies from websocket created by gorilla websocket description if you manually pass cookie header in dialcontext http header cookies from dialer jar will be overwritten steps to reproduce go dialer websocket dialer jar jar header http header header set cookie some cookie name some cookie value dialer dialcontext ctx url header possible reason from the first look i would say that this is happening because ignores | 1 |
5,733 | 30,314,520,776 | IssuesEvent | 2023-07-10 14:45:50 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | opened | Confusing timezone issue when editing Time cells | type: enhancement work: frontend status: ready restricted: maintainers | Mathesar appears to have some sort of timezone-related logic at play in the screencap below, but it's not immediately clear to me what is happening.
https://github.com/centerofci/mathesar/assets/42411/53b55916-0e8b-4049-92e0-4d6f1ded2b7e
1. I'm currently in timezone UTC-4:00.
1. I've made a brand new Time column and entered a value into a new row, specifying my time as `8:50`.
1. After saving, Mathesar displayed that value back to me as `4:50`.
1. Given that I never told Mathesar about anything related to timezones, and that Mathesar never told me about anything related to timezones, this discrepancy in value is surprising.
I'm not sure what work is required to fix this problem because I don't understand our product design around Time cells deeply enough. Perhaps the behavior I experienced is our expected behavior? If so, then we need to do a better job communicating this behavior to users.
| True | Confusing timezone issue when editing Time cells - Mathesar appears to have some sort of timezone-related logic at play in the screencap below, but it's not immediately clear to me what is happening.
https://github.com/centerofci/mathesar/assets/42411/53b55916-0e8b-4049-92e0-4d6f1ded2b7e
1. I'm currently in timezone UTC-4:00.
1. I've made a brand new Time column and entered a value into a new row, specifying my time as `8:50`.
1. After saving, Mathesar displayed that value back to me as `4:50`.
1. Given that I never told Mathesar about anything related to timezones, and that Mathesar never told me about anything related to timezones, this discrepancy in value is surprising.
I'm not sure what work is required to fix this problem because I don't understand our product design around Time cells deeply enough. Perhaps the behavior I experienced is our expected behavior? If so, then we need to do a better job communicating this behavior to users.
| main | confusing timezone issue when editing time cells mathesar appears so have some sort of timezone related logic at play in the screencap below but it t not immediately clear to me what is happening i m currently in timezone utc i ve made a brand new time column and entered a value into a new row specifying my time as after saving mathesar displayed that value back to me as given that i never told mathesar about anything related to timezones and that mathesar never told me about anything related to timezones this discrepancy in value is surprising i m not sure what work is required to fix this problem because i don t understand our product design around time cells deeply enough perhaps the behavior i experienced is our expected behavior if so then we need to do a better job communicating this behavior to users | 1 |
33,454 | 7,127,211,889 | IssuesEvent | 2018-01-20 19:07:49 | jccastillo0007/eFacturaT | https://api.github.com/repos/jccastillo0007/eFacturaT | opened | FIX THE WAY EXEMPT VAT (IVA) IS REPORTED ON WEB AND DESKTOP | bug defect | ONLY FOR THE CASE WHERE THE INVOICE INCLUDES ONLY AN EXEMPT-VAT TRANSFER (TRASLADO-IVA-EXENTO).
THE TAX NODE IS REPORTED AT THE CONCEPT LEVEL, BUT THE NODE IS NOT REPORTED AT THE GENERAL LEVEL.
<cfdi:Emisor Rfc="FOPR681125BQ0" Nombre="RIGOBERTO MISAEL FLORES DE LA PAZ" RegimenFiscal="612"></cfdi:Emisor>
<cfdi:Receptor Rfc="TIA101020BP4" Nombre="Tecnologias de Informacion Aplicada SA de CV" UsoCFDI="G03"></cfdi:Receptor>
<cfdi:Conceptos>
<cfdi:Concepto ClaveProdServ="01010101" NoIdentificacion="001" Cantidad="1" ClaveUnidad="E48" Unidad="servicio" Descripcion="renta de equipo de audio" ValorUnitario="10.00" Importe="10.00">
<cfdi:Impuestos>
<cfdi:Traslados>
<cfdi:Traslado Base="10.00" Impuesto="002" TipoFactor="Exento"></cfdi:Traslado>
</cfdi:Traslados>
</cfdi:Impuestos>
</cfdi:Concepto>
</cfdi:Conceptos>
<cfdi:Complemento>
<tfd:TimbreFiscalDigital xmlns:tfd="http://www.sat.gob.mx/TimbreFiscalDigital" xsi:schemaLocation="http://www.sat.gob.mx/TimbreFiscalDigital http://www.sat.gob.mx/sitio_internet/cfd/TimbreFiscalDigital/TimbreFiscalDigitalv11.xsd" Version="1.1" UUID="CA9BFFA6-5F2C-43B9-B3C5-B8E0374A11E8" FechaTimbrado="2018-01-20T12:36:04" RfcProvCertif="SAT970701NN3" SelloCFD="dWXJJT2R6Q7hHPPCy/tJhzrGnDtpHplcUDiIGQ0dkExSsXNpcIEQ5bbSf4zPZ887xkthKMoDAmyXI0oPCQzdhzYrRMeXIEAvPKoTBJPBEmfcNBDuEEbPel0opJTg4NZavn+LwivvjLVPVIeKNtboUUvJWj/Byzld/skXgV3hdTwBguzRlWz/nvL4f7wVWYwqZP+NokmbYgOYP8/hEDg+Ef4BNTAkV/SlGI/y4+yJDSs+GJf9535yraWkRPBNfI+noxcIN2rvQxXHlsV6SmElXHLX7Erm77camCRqG34DMn7SZTdGLiF7j7PXOIkN9VT9dRfZk58m3/4gjbwFD1Xufg==" NoCertificadoSAT="00001000000403258748" SelloSAT="ABTsiTLkS6gm9nTSFYm9vpDmeSJ7bc6gdbfcjYKd2CA6PNPXmrfNm+V5T6SbAQw3WdjUKZX3qk2BpjCgcILAskXzOIIPJqjlZVE0MGaQ8WZTo4HzwlPnie0UuWbZ9Vu+H2ythzmUvo8FEAQ25ov3PY1EbolepyQpVcGLPE8N+Vl4Z7gFVNsKNruDj3jeF5BZpqbCXlHYuykKmP0gjGU0J86iVUwKLZ+dUYME+GU76tVlmj+C+CsNNpXOB4eJfvGfQdtbJ4vvdjJc3ZULKIdk+VLY0/VfklOIIdaZ5J1lNJYq+AAdXwJzgl+3ffMOYIRnq9iiAFOcqBJHj3Y4QMAivw==" />
</cfdi:Complemento>
</cfdi:Comprobante> | 1.0 | FIX THE WAY EXEMPT VAT (IVA) IS REPORTED ON WEB AND DESKTOP - ONLY FOR THE CASE WHERE THE INVOICE INCLUDES ONLY AN EXEMPT-VAT TRANSFER (TRASLADO-IVA-EXENTO).
THE TAX NODE IS REPORTED AT THE CONCEPT LEVEL, BUT THE NODE IS NOT REPORTED AT THE GENERAL LEVEL.
<cfdi:Emisor Rfc="FOPR681125BQ0" Nombre="RIGOBERTO MISAEL FLORES DE LA PAZ" RegimenFiscal="612"></cfdi:Emisor>
<cfdi:Receptor Rfc="TIA101020BP4" Nombre="Tecnologias de Informacion Aplicada SA de CV" UsoCFDI="G03"></cfdi:Receptor>
<cfdi:Conceptos>
<cfdi:Concepto ClaveProdServ="01010101" NoIdentificacion="001" Cantidad="1" ClaveUnidad="E48" Unidad="servicio" Descripcion="renta de equipo de audio" ValorUnitario="10.00" Importe="10.00">
<cfdi:Impuestos>
<cfdi:Traslados>
<cfdi:Traslado Base="10.00" Impuesto="002" TipoFactor="Exento"></cfdi:Traslado>
</cfdi:Traslados>
</cfdi:Impuestos>
</cfdi:Concepto>
</cfdi:Conceptos>
<cfdi:Complemento>
<tfd:TimbreFiscalDigital xmlns:tfd="http://www.sat.gob.mx/TimbreFiscalDigital" xsi:schemaLocation="http://www.sat.gob.mx/TimbreFiscalDigital http://www.sat.gob.mx/sitio_internet/cfd/TimbreFiscalDigital/TimbreFiscalDigitalv11.xsd" Version="1.1" UUID="CA9BFFA6-5F2C-43B9-B3C5-B8E0374A11E8" FechaTimbrado="2018-01-20T12:36:04" RfcProvCertif="SAT970701NN3" SelloCFD="dWXJJT2R6Q7hHPPCy/tJhzrGnDtpHplcUDiIGQ0dkExSsXNpcIEQ5bbSf4zPZ887xkthKMoDAmyXI0oPCQzdhzYrRMeXIEAvPKoTBJPBEmfcNBDuEEbPel0opJTg4NZavn+LwivvjLVPVIeKNtboUUvJWj/Byzld/skXgV3hdTwBguzRlWz/nvL4f7wVWYwqZP+NokmbYgOYP8/hEDg+Ef4BNTAkV/SlGI/y4+yJDSs+GJf9535yraWkRPBNfI+noxcIN2rvQxXHlsV6SmElXHLX7Erm77camCRqG34DMn7SZTdGLiF7j7PXOIkN9VT9dRfZk58m3/4gjbwFD1Xufg==" NoCertificadoSAT="00001000000403258748" SelloSAT="ABTsiTLkS6gm9nTSFYm9vpDmeSJ7bc6gdbfcjYKd2CA6PNPXmrfNm+V5T6SbAQw3WdjUKZX3qk2BpjCgcILAskXzOIIPJqjlZVE0MGaQ8WZTo4HzwlPnie0UuWbZ9Vu+H2ythzmUvo8FEAQ25ov3PY1EbolepyQpVcGLPE8N+Vl4Z7gFVNsKNruDj3jeF5BZpqbCXlHYuykKmP0gjGU0J86iVUwKLZ+dUYME+GU76tVlmj+C+CsNNpXOB4eJfvGfQdtbJ4vvdjJc3ZULKIdk+VLY0/VfklOIIdaZ5J1lNJYq+AAdXwJzgl+3ffMOYIRnq9iiAFOcqBJHj3Y4QMAivw==" />
</cfdi:Complemento>
</cfdi:Comprobante> | non_main | corregir la manera de reportar el iva exento en web y en escritorio solo para el caso donde la factura incluya únicamente traslado iva exento se reporta el nodo de impuestos a nivel concepto pero a nivel general no se reporta el nodo | 0 |
242,955 | 18,674,584,948 | IssuesEvent | 2021-10-31 10:37:34 | codezonediitj/pydatastructs | https://api.github.com/repos/codezonediitj/pydatastructs | closed | Add note in Graph doc string for clarifying adding nodes and edges | documentation enhancement graphs | #### Description of the problem
<!--Please provide a clear and details information of the bug/data structure to be added.-->
The correct way to create a graph is to first create nodes of the right type (`AdjacencyListGraphNode` or `AdjacencyListMatrixNode`) and then add these nodes to the graph. Once done, add edges between these nodes.
1. A note should be added in the class doc string for the above process.
2. In the doc string of `add_edge` it should be clarified that this function will assume that the nodes are already present in the graph. If they are not present, then this function will not add the new nodes on its own. In case someone attempts to do that then a nice error message should be raised describing the same.
#### Example of the problem
<!--Provide a reproducible example code which is causing the bug to appear. Leave this section if the problem is not a bug.-->
#### References/Other comments
cc: @pratikgl | 1.0 | Add note in Graph doc string for clarifying adding nodes and edges - #### Description of the problem
<!--Please provide a clear and details information of the bug/data structure to be added.-->
The correct way to create a graph is to first create nodes of the right type (`AdjacencyListGraphNode` or `AdjacencyListMatrixNode`) and then add these nodes to the graph. Once done, add edges between these nodes.
1. A note should be added in the class doc string for the above process.
2. In the doc string of `add_edge` it should be clarified that this function will assume that the nodes are already present in the graph. If they are not present, then this function will not add the new nodes on its own. In case someone attempts to do that then a nice error message should be raised describing the same.
#### Example of the problem
<!--Provide a reproducible example code which is causing the bug to appear. Leave this section if the problem is not a bug.-->
#### References/Other comments
cc: @pratikgl | non_main | add note in graph doc string for clarifying adding nodes and edges description of the problem the correct way to create a graph is to first create nodes of the right type adjacencylistgraphnode or adjacencylistmatrixnode and then add these nodes to the graph once done add edges between these nodes a note should be added in the class doc string for the above process in the doc string of add edge it should be clarified that this function will assume that the nodes are already present in the graph if they are not present then this function will not add the new nodes on it s own in case someone attempts to do that then a nice error message should be raised describing the same example of the problem references other comments cc pratikgl | 0 |
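The process the row above describes — create nodes first, then wire edges between existing nodes, with `add_edge` failing loudly when an endpoint is missing — can be sketched generically (this is an illustrative adjacency-list type, not pydatastructs' actual API):

```go
package main

import (
	"errors"
	"fmt"
)

// Graph is a minimal adjacency-list graph keyed by node name.
type Graph struct {
	adj map[string][]string
}

func NewGraph() *Graph { return &Graph{adj: map[string][]string{}} }

// AddNode registers a node; adding it twice is a no-op.
func (g *Graph) AddNode(name string) {
	if _, ok := g.adj[name]; !ok {
		g.adj[name] = nil
	}
}

// AddEdge assumes both endpoints already exist and raises a descriptive
// error otherwise — the behaviour the issue asks add_edge to document.
func (g *Graph) AddEdge(u, v string) error {
	for _, n := range []string{u, v} {
		if _, ok := g.adj[n]; !ok {
			return errors.New("node " + n + " is not present in the graph; add it before adding edges")
		}
	}
	g.adj[u] = append(g.adj[u], v)
	return nil
}

func main() {
	g := NewGraph()
	g.AddNode("a")
	g.AddNode("b")
	fmt.Println(g.AddEdge("a", "b")) // succeeds: both nodes exist
	fmt.Println(g.AddEdge("a", "c")) // fails: "c" was never added
}
```

The point is the error path: silently creating the missing node would hide typos, while a clear message tells the user what the expected workflow is.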
2,284 | 8,132,816,890 | IssuesEvent | 2018-08-18 16:41:06 | openwrt/packages | https://api.github.com/repos/openwrt/packages | closed | wifidog init script executing wifidog too early | waiting for maintainer | I've been testing wifidog with openwrt on several of the ubiquiti air-routers placed at different locations, cafes, hotels etc... I've found that the default init script with procd starts wifidog too early and this causes problems. Although wifidog still functions and behaves normally, memory is gradually leaked until wifidog crashes because it can't malloc anymore, either that or the router needs hard rebooting.
Removing procd from the init script and adding "wifidog -c /etc/myconfig.conf" on its own without procd doesn't work at all and wifidog stops shortly after executing because the vlan interface it's set to listen on is not up yet. With procd, wifidog is somehow able to start even though the vlan listen interface is not up.
Starting wifidog manually via ssh after the router boots doesn't exhibit any memory problems when left running for an extended period of time with many clients connecting and logging in.
Changing START=65 to 95 and adding "sleep 20" into the wifidog init script with or without procd fixes the problem with starting too early and also the memory leak.
I don't know if this problem occurs on other hardware running openwrt as I haven't tested any in heavy use environments like Hotels, just the air-router.
| True | wifidog init script executing wifidog too early - I've been testing wifidog with openwrt on several of the ubiquiti air-routers placed at different locations, cafes, hotels etc... I've found that the default init script with procd starts wifidog too early and this causes problems. Although wifidog still functions and behaves normally, memory is gradually leaked until wifidog crashes because it can't malloc anymore, either that or the router needs hard rebooting.
Removing procd from the init script and adding "wifidog -c /etc/myconfig.conf" on its own without procd doesn't work at all and wifidog stops shortly after executing because the vlan interface it's set to listen on is not up yet. With procd, wifidog is somehow able to start even though the vlan listen interface is not up.
Starting wifidog manually via ssh after the router boots doesn't exhibit any memory problems when left running for an extended period of time with many clients connecting and logging in.
Changing START=65 to 95 and adding "sleep 20" into the wifidog init script with or without procd fixes the problem with starting too early and also the memory leak.
I don't know if this problem occurs on other hardware running openwrt as I haven't tested any in heavy use environments like Hotels, just the air-router.
| main | wifidog init script executing wifidog too early i ve been testing wifidog with openwrt on several of the ubiquiti air routers placed at different locations cafe s hotels etc i ve found that the default init script with procd starts wifidog too early and this causes problems although wifidog still functions and behaves normally memory is gradually leaked until wifidog crashes because it cant malloc anymore either that or the router needs hard rebooting removing procd from the init script and adding wifidog c etc myconfig conf on its own without procd doesn t work at all and wifidog stops shortly after executing because the vlan interface its set to listen on is not up yet with procd wifidog is somehow able to start even though the vlan listen interface is not up starting wifidog manually via ssh after the router boots doesn t exhibit any memory problems when left running for an extended period of time with many clients connecting and logging in changing start to and adding sleep into the wifidog init script with or without procd fixes the problem with starting too early and also the memory leak i don t know if this problem occurs on other hardware running openwrt as i haven t tested any in heavy use environments like hotels just the air router | 1 |
3,520 | 13,804,304,154 | IssuesEvent | 2020-10-11 08:17:36 | sukritishah15/DS-Algo-Point | https://api.github.com/repos/sukritishah15/DS-Algo-Point | closed | WARNING - MAINTAINERS | maintainers | ## 🚀 We do not appreciate and encourage bossy or rude behaviour with any of our contributors by any maintainer and vice-versa as well.
If there is an issue with any PR, mention it politely. Do not try and act BOSSY.
We will come across a lot of beginners and they will make a lot of mistakes, it is absolutely OKAY. Guide them. Do not YELL or type rude comments.
- MAKING MISTAKES IS NOT SPAMMY.
- Sending repetitive code/content via a PR just to increase the PR count is SPAMMY.
- Making a PR which does not contribute much is SPAMMY.
Also, for the contributors, please be polite with the maintainers under all circumstances.
Let's not forget we are all learners at the end of the day. We all have something to learn.
Keep a learning and growth mindset and exchange healthy, useful and productive conversations.
Enjoy open source at its core and recognize it for what it is.
P.S. - This post comes across, after I saw rude comments by **one** of the maintainer. **Rest all are doing extremely well.** | True | WARNING - MAINTAINERS - ## 🚀 We do not appreciate and encourage bossy or rude behaviour with any of our contributors by any maintainer and vice-versa as well.
If there is an issue with any PR, mention it politely. Do not try and act BOSSY.
We will come across a lot of beginners and they will make a lot of mistakes, it is absolutely OKAY. Guide them. Do not YELL or type rude comments.
- MAKING MISTAKES IS NOT SPAMMY.
- Sending repetitive code/content via a PR just to increase the PR count is SPAMMY.
- Making a PR which does not contribute much is SPAMMY.
Also, for the contributors, please be polite with the maintainers under all circumstances.
Let's not forget we are all learners at the end of the day. We all have something to learn.
Keep a learning and growth mindset and exchange healthy, useful and productive conversations.
Enjoy open source at its core and recognize it for what it is.
P.S. - This post comes across, after I saw rude comments by **one** of the maintainer. **Rest all are doing extremely well.** | main | warning maintainers 🚀 we do not appreciate and encourage bossy or rude behaviour with any of our contributors by any maintainer and vice versa as well if there is an issue with any pr mention it politely do not try and act bossy we will come across a lot of beginners and they will make a lot of mistakes it is absolutely okay guide them do not yell or type rude comments making mistakes is not spammy sending repetitive code content via a pr just to increase the pr count is spammy making a pr which does not contribute much is spammy also for the contributors please be polite with the maintainers under all circumstances let s not forget we are all learners at the end of the day we all have something to learn keep a learning and growth mindset and exchange healthy useful and productive conversations enjoy open source at it s core and recognize it for what it is p s this post comes across after i saw rude comments by one of the maintainer rest all are doing extremely well | 1 |
132,393 | 10,745,195,183 | IssuesEvent | 2019-10-30 08:29:20 | DivanteLtd/shopware-pwa | https://api.github.com/repos/DivanteLtd/shopware-pwa | closed | Populate Shopware test instance with pretty data | Test eCommerce SIte | **Context**
Frontend layer does not look well with default gray images. We want to populate Shopware 6 Test Instance with data and images that will cause the frontend layer to look good. That will make the testing process easier. | 1.0 | Populate Shopware test instance with pretty data - **Context**
Frontend layer does not look well with default gray images. We want to populate Shopware 6 Test Instance with data and images that will cause the frontend layer to look good. That will make the testing process easier. | non_main | populate shopware test instance with pretty data context frontend layer does not look well with default gray images we want to populate shopware test instance with data and images that will cause the frontend layer to look good that will make the testing process easier | 0 |
541,005 | 15,820,097,496 | IssuesEvent | 2021-04-05 18:27:57 | litecoin-foundation/loafwallet-ios | https://api.github.com/repos/litecoin-foundation/loafwallet-ios | closed | 🥳[Feature] Reset Litecoin Card password | Priority-Medium enhancement size: 3 | ## Goal
Allow the user to reset their Litecoin Card password from the Login / Card view
## Approach
Using the forgot password endpoint to allow the user to reset their Litecoin Card password using the registered email address
- Endpoint: `PATCH /v1/user/:user_id/password`
- Docs: https://docs.getblockcard.com/docs/api-user-change-password/
## Definition of Done
- [ ] Unit Test written to verify the endpoint
- [ ] Reset accepts the registration email with a modal with a textfield and includes an `ok` and `cancel` button
- [ ] User is able to login with the new password in Litewallet : Card
| 1.0 | 🥳[Feature] Reset Litecoin Card password - ## Goal
Allow the user to reset their Litecoin Card password from the Login / Card view
## Approach
Using the forgot password endpoint to allow the user to reset their Litecoin Card password using the registered email address
- Endpoint: `PATCH /v1/user/:user_id/password`
- Docs: https://docs.getblockcard.com/docs/api-user-change-password/
## Definition of Done
- [ ] Unit Test written to verify the endpoint
- [ ] Reset accepts the registration email with a modal with a textfield and includes an `ok` and `cancel` button
- [ ] User is able to login with the new password in Litewallet : Card
| non_main | 🥳 reset litecoin card password goal allow the user the reset their litecoin card password from the login card view approach using the forgot password endpoint to allow the user to reset their litecoin card password using the registered email address endpoint patch user user id password docs definition of done unit test written to verify the endpoint reset accepts the registration email with a modal with a textfield and includes an ok and cancel button user is able to login with the new password in litewallet card | 0 |
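A hedged sketch of how a client might call the password-reset endpoint listed in the row above. The JSON field name and base URL are illustrative assumptions, not the documented schema; verify them against the linked getblockcard reference. The function only builds the request, so it can be checked without network access:

```python
def build_password_reset_request(base_url, user_id, new_password):
    """Assemble the PATCH /v1/user/:user_id/password request described
    above. The 'password' payload key is an assumption; verify it
    against the API reference before use."""
    if len(new_password) < 8:
        raise ValueError("password too short")
    return {
        "method": "PATCH",
        "url": f"{base_url}/v1/user/{user_id}/password",
        "json": {"password": new_password},
    }
```

Keeping request construction separate from the HTTP call also makes the unit test in the Definition of Done easy to write.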
2,264 | 7,961,991,169 | IssuesEvent | 2018-07-13 12:54:47 | RalfKoban/MiKo-Analyzers | https://api.github.com/repos/RalfKoban/MiKo-Analyzers | closed | 'EventArgs' should not have public setters | Area: analyzer Area: maintainability feature in progress | Classes inheriting from `EventArgs` should not provide properties that have publicly visible setters.
Instead these setters shall be `private` to avoid setting from outside. They shall also neither be `protected` nor `internal`.
If these properties shall be set, then there should only be (circuit breaker) methods that allow to set the property once.
Explanation:
The reason is that events are raised and handled by event handlers. If now the event data changes between the handlers, then there is a race condition ongoing as not all handlers get the same result.
If the event shall be canceled, then the setting method shall act as circuit-breaker so that the event can be canceled but then no longer be un-canceled. | True | 'EventArgs' should not have public setters - Classes inheriting from `EventArgs` should not provide properties that have publicly visible setters.
Instead these setters shall be `private` to avoid setting from outside. They shall also neither be `protected` nor `internal`.
If these properties shall be set, then there should only be (circuit breaker) methods that allow to set the property once.
Explanation:
The reason is that events are raised and handled by event handlers. If now the event data changes between the handlers, then there is a race condition ongoing as not all handlers get the same result.
If the event shall be canceled, then the setting method shall act as circuit-breaker so that the event can be canceled but then no longer be un-canceled. | main | eventargs should not have public setters classes inheriting from eventargs should not provide properties that have publicly visible setters instead these setters shall be private to avoid setting from outside they shall also neither be protected nor internal if these properties shall be set then there should only be circuit breaker methods that allow to set the property once explanation the reason is that events are raised and handled by event handlers if now the event data changes between the handlers then there is a race condition ongoing as not all handlers get the same result if the event shall be canceled then the setting method shall act as circuit breaker so that the event can be canceled but then no longer be un canceled | 1 |
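The circuit-breaker idea in the analyzer rule above, transposed into a small Python sketch for illustration only (the rule itself targets C# `EventArgs`): the flag has no public setter, and the one method that sets it is strictly one-way.

```python
class CancelEventArgs:
    """Event data whose Cancel flag has no public setter: cancel_event()
    can flip it to True, and nothing can ever flip it back, so handlers
    that run later cannot 'un-cancel' the event."""
    def __init__(self):
        self._cancel = False

    @property
    def cancel(self):
        return self._cancel

    def cancel_event(self):
        self._cancel = True  # one-way circuit breaker, never reset
```

Because `cancel` is a read-only property, an assignment from a handler raises instead of silently re-introducing the race the rule warns about.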
271,290 | 29,418,930,371 | IssuesEvent | 2023-05-31 01:03:25 | MidnightBSD/src | https://api.github.com/repos/MidnightBSD/src | reopened | CVE-2022-30699 (Medium) detected in multiple libraries | Mend: dependency security vulnerability | ## CVE-2022-30699 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>hardenedBSDbbfb1edd70e15241d852d82eb7e1c1049a01b886</b>, <b>hardenedBSDbbfb1edd70e15241d852d82eb7e1c1049a01b886</b>, <b>hardenedBSDbbfb1edd70e15241d852d82eb7e1c1049a01b886</b>, <b>hardenedBSDbbfb1edd70e15241d852d82eb7e1c1049a01b886</b>, <b>hardenedBSDbbfb1edd70e15241d852d82eb7e1c1049a01b886</b>, <b>hardenedBSDbbfb1edd70e15241d852d82eb7e1c1049a01b886</b>, <b>hardenedBSDbbfb1edd70e15241d852d82eb7e1c1049a01b886</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
NLnet Labs Unbound, up to and including version 1.16.1, is vulnerable to a novel type of the "ghost domain names" attack. The vulnerability works by targeting an Unbound instance. Unbound is queried for a rogue domain name when the cached delegation information is about to expire. The rogue nameserver delays the response so that the cached delegation information is expired. Upon receiving the delayed answer containing the delegation information, Unbound overwrites the now expired entries. This action can be repeated when the delegation information is about to expire making the rogue delegation information ever-updating. From version 1.16.2 on, Unbound stores the start time for a query and uses that to decide if the cached delegation information can be overwritten.
<p>Publish Date: 2022-08-01
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-30699>CVE-2022-30699</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-30699">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-30699</a></p>
<p>Release Date: 2022-08-01</p>
<p>Fix Resolution: release-1.16.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-30699 (Medium) detected in multiple libraries - ## CVE-2022-30699 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>hardenedBSDbbfb1edd70e15241d852d82eb7e1c1049a01b886</b>, <b>hardenedBSDbbfb1edd70e15241d852d82eb7e1c1049a01b886</b>, <b>hardenedBSDbbfb1edd70e15241d852d82eb7e1c1049a01b886</b>, <b>hardenedBSDbbfb1edd70e15241d852d82eb7e1c1049a01b886</b>, <b>hardenedBSDbbfb1edd70e15241d852d82eb7e1c1049a01b886</b>, <b>hardenedBSDbbfb1edd70e15241d852d82eb7e1c1049a01b886</b>, <b>hardenedBSDbbfb1edd70e15241d852d82eb7e1c1049a01b886</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
NLnet Labs Unbound, up to and including version 1.16.1, is vulnerable to a novel type of the "ghost domain names" attack. The vulnerability works by targeting an Unbound instance. Unbound is queried for a rogue domain name when the cached delegation information is about to expire. The rogue nameserver delays the response so that the cached delegation information is expired. Upon receiving the delayed answer containing the delegation information, Unbound overwrites the now expired entries. This action can be repeated when the delegation information is about to expire making the rogue delegation information ever-updating. From version 1.16.2 on, Unbound stores the start time for a query and uses that to decide if the cached delegation information can be overwritten.
<p>Publish Date: 2022-08-01
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-30699>CVE-2022-30699</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-30699">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-30699</a></p>
<p>Release Date: 2022-08-01</p>
<p>Fix Resolution: release-1.16.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries vulnerability details nlnet labs unbound up to and including version is vulnerable to a novel type of the ghost domain names attack the vulnerability works by targeting an unbound instance unbound is queried for a rogue domain name when the cached delegation information is about to expire the rogue nameserver delays the response so that the cached delegation information is expired upon receiving the delayed answer containing the delegation information unbound overwrites the now expired entries this action can be repeated when the delegation information is about to expire making the rogue delegation information ever updating from version on unbound stores the start time for a query and uses that to decide if the cached delegation information can be overwritten publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution release step up your open source security game with mend | 0 |
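The fix described in the CVE text above (from 1.16.2 on, Unbound keeps the query's start time) can be modeled with a toy cache. This is a deliberately simplified sketch of the idea, not Unbound's actual code:

```python
class DelegationCache:
    """Toy model of the ghost-domain fix: a delayed answer whose query
    began before the cached entry expired may not resurrect that entry
    after it has lapsed."""
    def __init__(self):
        self._entries = {}  # name -> expiry timestamp

    def store(self, name, ttl, query_start, now):
        expiry = self._entries.get(name)
        expired_mid_query = (
            expiry is not None and expiry <= now and query_start < expiry
        )
        if expired_mid_query:
            return False          # ghost-domain refresh rejected
        self._entries[name] = now + ttl
        return True               # fresh or still-valid data accepted
```

A query that starts after the entry has already expired can still refresh it normally; only the delayed-answer window is closed.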
307,975 | 9,424,670,280 | IssuesEvent | 2019-04-11 14:33:26 | python/mypy | https://api.github.com/repos/python/mypy | closed | segmentation fault on indexing object with __getattr__ | crash priority-0-high | Please provide more information to help us understand the issue:
* Are you reporting a bug, or opening a feature request?
Bug.
* Please insert below the code you are checking with mypy,
or a mock-up repro if the source is private. We would appreciate
if you try to simplify your case to a minimal repro.
```python
class C:
def __getattr__(self, name: str) -> 'C':
...
def f(v: C) -> None:
v[0]
```
* What is the actual behavior/output?
segfault
* What is the behavior/output you expect?
not crashing
* What are the versions of mypy and Python you are using?
python 3.6.7 on Ubuntu 18.04.2 x86_64
mypy 0.670 installed march 20 2019 via pip
Do you see the same issue after installing mypy from Git master?
yes
0.680+dev.4e0a1583aeb00b248e187054980771f1897a1d31
* What are the mypy flags you are using? (For example --strict-optional)
none
| 1.0 | segmentation fault on indexing object with __getattr__ - Please provide more information to help us understand the issue:
* Are you reporting a bug, or opening a feature request?
Bug.
* Please insert below the code you are checking with mypy,
or a mock-up repro if the source is private. We would appreciate
if you try to simplify your case to a minimal repro.
```python
class C:
def __getattr__(self, name: str) -> 'C':
...
def f(v: C) -> None:
v[0]
```
* What is the actual behavior/output?
segfault
* What is the behavior/output you expect?
not crashing
* What are the versions of mypy and Python you are using?
python 3.6.7 on Ubuntu 18.04.2 x86_64
mypy 0.670 installed march 20 2019 via pip
Do you see the same issue after installing mypy from Git master?
yes
0.680+dev.4e0a1583aeb00b248e187054980771f1897a1d31
* What are the mypy flags you are using? (For example --strict-optional)
none
| non_main | segmentation fault on indexing object with getattr please provide more information to help us understand the issue are you reporting a bug or opening a feature request bug please insert below the code you are checking with mypy or a mock up repro if the source is private we would appreciate if you try to simplify your case to a minimal repro python class c def getattr self name str c def f v c none v what is the actual behavior output segfault what is the behavior output you expect not crashing what are the versions of mypy and python you are using python on ubuntu mypy installed march via pip do you see the same issue after installing mypy from git master yes dev what are the mypy flags you are using for example strict optional none | 0 |
35,246 | 14,655,666,915 | IssuesEvent | 2020-12-28 11:33:33 | microsoft/vscode-cpptools | https://api.github.com/repos/microsoft/vscode-cpptools | closed | IntelliSense within remote ssh broken | Language Service more info needed remote | **Type:** IntelliSense
**Describe the bug**
- OS and Version: Mac Mojave 10.14.6, remote with Linux 4.12.14-lp151.28.36-default x86_64
- VS Code Version: 1.42.1
- C/C++ Extension Version: 0.26.3
- Other extensions you installed (and if the issue persists after disabling them): C/C++ GNU Global (0.3.2), Visual Studio IntelliCode (1.2.5), Git History (0.6.0)
When using the [remote ssh extension](https://code.visualstudio.com/blogs/2019/07/25/remote-ssh), the IntelliSense works for the first couple of seconds but then has no suggestions for the rest of the session.
**To Reproduce**
<!-- Steps to reproduce the behavior: -->
<!-- *The most actionable issue reports include a code sample including configuration files such as c_cpp_properties.json* -->
1. Get the remote ssh extension
2. Remote into a Linux server
3. Start writing some C code.
4. See the lack of suggestions and autocomplete after a couple of minutes of usage. (error)
**Expected behavior**
I expect IntelliSense to remain fully functional in the remote ssh window.
| 1.0 | IntelliSense within remote ssh broken - **Type:** IntelliSense
**Describe the bug**
- OS and Version: Mac Mojave 10.14.6, remote with Linux 4.12.14-lp151.28.36-default x86_64
- VS Code Version: 1.42.1
- C/C++ Extension Version: 0.26.3
- Other extensions you installed (and if the issue persists after disabling them): C/C++ GNU Global (0.3.2), Visual Studio IntelliCode (1.2.5), Git History (0.6.0)
When using the [remote ssh extension](https://code.visualstudio.com/blogs/2019/07/25/remote-ssh), the IntelliSense works for the first couple of seconds but then has no suggestions for the rest of the session.
**To Reproduce**
<!-- Steps to reproduce the behavior: -->
<!-- *The most actionable issue reports include a code sample including configuration files such as c_cpp_properties.json* -->
1. Get the remote ssh extension
2. Remote into a Linux server
3. Start writing some C code.
4. See the lack of suggestions and autocomplete after a couple of minutes of usage. (error)
**Expected behavior**
I expect IntelliSense to remain fully functional in the remote ssh window.
| non_main | intellisense within remote ssh broken type intellisense describe the bug os and version mac mojave remote with linux default vs code version c c extension version other extensions you installed and if the issue persists after disabling them c c gnu global visual studio intellicode git history when using the the intellisense works for the first couple of seconds but then has no suggestions for the rest of the session to reproduce get the remote ssh extension remote into a linus server start writing some c code see the lack of suggestions and autocomplete after a couple of minutes of usage error expected behavior i expect there to be intellisense working in full functionality in the remote ssh window | 0 |
2,906 | 10,327,617,632 | IssuesEvent | 2019-09-02 07:28:51 | varenc/homebrew-ffmpeg | https://api.github.com/repos/varenc/homebrew-ffmpeg | closed | Possible improvements, housekeeping | maintainer-feedback | Based on Lou Logan's comments:
- `--enable-avresample` really needed?
- `--enable-librtmp` can be removed
- `--enable-libxvid` can be removed
- `--enable-libspeex` is obsolete, replaced by Opus
- `--enable-hardcoded-tables` – might be time to drop that if it makes no significant difference if someone wants to test
- Figure out how to deal with `libjack` – should it be optional under Linux?
My suggestion would be to:
- [x] Keep librtmp, but make it optional => #15
- [x] Keep libxvid, but make it optional => #13
- [x] Keep libspeex, but make it optional => #14
- [ ] Check usage of `avresample`
- [ ] Add support for `libjack` => #16
- [x] Keep hardcoded tables, but have to further investigate --> done by Reto. | True | Possible improvements, housekeeping - Based on Lou Logan's comments:
- `--enable-avresample` really needed?
- `--enable-librtmp` can be removed
- `--enable-libxvid` can be removed
- `--enable-libspeex` is obsolete, replaced by Opus
- `--enable-hardcoded-tables` – might be time to drop that if it makes no significant difference if someone wants to test
- Figure out how to deal with `libjack` – should it be optional under Linux?
My suggestion would be to:
- [x] Keep librtmp, but make it optional => #15
- [x] Keep libxvid, but make it optional => #13
- [x] Keep libspeex, but make it optional => #14
- [ ] Check usage of `avresample`
- [ ] Add support for `libjack` => #16
- [x] Keep hardcoded tables, but have to further investigate --> done by Reto. | main | possible improvements housekeeping based on lou logan s comments enable avresample really needed enable librtmp can be removed enable libxvid can be removed enable libspeex is obsolete replaced by opus enable hardcoded tables – might be time to drop that if it makes no significant difference if someone wants to test figure out how to deal with libjack – should it be optional under linux my suggestion would be to keep librtmp but make it optional keep libxvid but make it optional keep libspeex but make it optional check usage of avresample add support for libjack keep hardcoded tables but have to further investigate done by reto | 1 |
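The "keep X, but make it optional" items in the checklist above boil down to building the `./configure` argument list from user-selected options. A hypothetical Python sketch of that pattern (the real tap does this in its Ruby formula; the flag names simply mirror the list above):

```python
def configure_args(options):
    """Return ffmpeg ./configure flags: always-on flags first, then
    --enable-* only for the features the user opted into."""
    args = ["--enable-hardcoded-tables"]
    for feature in ("librtmp", "libxvid", "libspeex"):
        if options.get(feature):
            args.append(f"--enable-{feature}")
    return args
```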
4,421 | 22,782,699,690 | IssuesEvent | 2022-07-08 22:09:33 | radical-semiconductor/katsu-board-demo | https://api.github.com/repos/radical-semiconductor/katsu-board-demo | opened | DRY out github workflows (ci + releases) | maintainability | just use explicit include to have multiple variables without adding dimensions
include:
  - site: "production"
    datacenter: "site-a"
  - site: "staging"
    datacenter: "site-b" | True | DRY out github workflows (ci + releases) - just use explicit include to have multiple variables without adding dimensions
include:
  - site: "production"
    datacenter: "site-a"
  - site: "staging"
    datacenter: "site-b" | main | dry out github workflows ci releases just use explicit include to have multiple variables without adding dimensions include site production datacenter site a site staging datacenter site b | 1
3,245 | 12,368,707,071 | IssuesEvent | 2020-05-18 14:13:33 | Kashdeya/Tiny-Progressions | https://api.github.com/repos/Kashdeya/Tiny-Progressions | closed | Berry bush generation blacklist | Version not Maintainted | A configurable feature where we can blacklist dimensions for the berry bush generation etc could be useful. eg: berry bushes spawn in the betweenlands dimension which doesn't fit in the dimension. | True | Berry bush generation blacklist - A configurable feature where we can blacklist dimensions for the berry bush generation etc could be useful. eg: berry bushes spawn in the betweenlands dimension which doesn't fit in the dimension. | main | berry bush generation blacklist a configurable feature where we can blacklist dimensions for the berry bush generation etc could be useful eg berry bushes spawn in the betweenlands dimension which doesn t fit in the dimension | 1 |
5,767 | 30,567,421,172 | IssuesEvent | 2023-07-20 18:55:40 | ocbe-uio/trajpy | https://api.github.com/repos/ocbe-uio/trajpy | closed | Remove plot feature from the GUI to reduce dependencies | good first issue maintainability | Currently we have a plotting feature in the GUI. This feature is unnecessary and increase the number of dependencies.
Code that should be **removed**:
- in _trajpy/gui.py_
https://github.com/ocbe-uio/trajpy/blob/2b742349b8f24aab32b39369b47f59302c150913/trajpy/gui.py#L6-L10
https://github.com/ocbe-uio/trajpy/blob/2b742349b8f24aab32b39369b47f59302c150913/trajpy/gui.py#L34
https://github.com/ocbe-uio/trajpy/blob/2b742349b8f24aab32b39369b47f59302c150913/trajpy/gui.py#L110
https://github.com/ocbe-uio/trajpy/blob/2b742349b8f24aab32b39369b47f59302c150913/trajpy/gui.py#L266-L271
- in _requirements.txt_
https://github.com/ocbe-uio/trajpy/blob/2b742349b8f24aab32b39369b47f59302c150913/requirements.txt#L3
After removal we should organise the GUI buttons in a better way. We can reduce the window size and increase the text size for improved accessibility.
| True | Remove plot feature from the GUI to reduce dependencies - Currently we have a plotting feature in the GUI. This feature is unnecessary and increase the number of dependencies.
Code that should be **removed**:
- in _trajpy/gui.py_
https://github.com/ocbe-uio/trajpy/blob/2b742349b8f24aab32b39369b47f59302c150913/trajpy/gui.py#L6-L10
https://github.com/ocbe-uio/trajpy/blob/2b742349b8f24aab32b39369b47f59302c150913/trajpy/gui.py#L34
https://github.com/ocbe-uio/trajpy/blob/2b742349b8f24aab32b39369b47f59302c150913/trajpy/gui.py#L110
https://github.com/ocbe-uio/trajpy/blob/2b742349b8f24aab32b39369b47f59302c150913/trajpy/gui.py#L266-L271
- in _requirements.txt_
https://github.com/ocbe-uio/trajpy/blob/2b742349b8f24aab32b39369b47f59302c150913/requirements.txt#L3
After removal we should organise the GUI buttons in a better way. We can reduce the window size and increase the text size for improved accessibility.
| main | remove plot feature from the gui to reduce dependencies currently we have a plotting feature in the gui this feature is unnecessary and increase the number of dependencies code that should be removed in trajpy gui py in requirements txt after removal we should organise the gui buttons in a better way we can reduce the windows size and increase text size for improved accessibility | 1 |
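If outright removal of the plot feature ever proves too aggressive, a softer variant of the same dependency reduction is a lazy, optional import, so matplotlib stops being a hard requirement. A hedged sketch of that pattern:

```python
def load_matplotlib():
    """Return the matplotlib module if it is installed, else None,
    letting the GUI disable its plot button instead of crashing at
    import time."""
    try:
        import matplotlib
        return matplotlib
    except ImportError:
        return None
```

The requirements.txt entry can then be dropped either way; only users who want plotting install matplotlib themselves.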
1,752 | 6,574,969,094 | IssuesEvent | 2017-09-11 14:38:47 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | declare a "port" parameter for os_security_group_rule | affects_2.3 cloud feature_idea openstack waiting_on_maintainer | ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
os_security_group_rule
I end up with a lot of "os_security_group_rule:" statements that have:
port_range_max: "{{ item }}"
port_range_min: "{{ item }}"
For rules where only a single port is needed. Is there some reason not to define a "port:" parameter that, if passed, sets port_range_min and port_range_max to that value?
That should pass the right things down the chain through shade to openstack.
| True | declare a "port" parameter for os_security_group_rule - ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
os_security_group_rule
I end up with a lot of "os_security_group_rule:" statements that have:
port_range_max: "{{ item }}"
port_range_min: "{{ item }}"
For rules where only a single port is needed. Is there some reason not to define a "port:" parameter that, if passed, sets port_range_min and port_range_max to that value?
That should pass the right things down the chain through shade to openstack.
| main | declare a port parameter for os security group rule issue type feature idea component name os security group rule i end up with a lot of os security group rule statements that have port range max item port range min item for rules where only a single port is needed is there some reason not to define a port parameter that if that has been passed set port range min and port range max to that value that should pass the right things down the chain through shade to openstack | 1 |
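The proposed parameter handling is straightforward to sketch. This is not the actual ansible module code, just an illustration of expanding a single `port` into the range pair before the rule is handed down through shade to OpenStack:

```python
def normalize_port_params(params):
    """Expand a single 'port' value into port_range_min/max; reject a
    mix of 'port' and explicit range parameters as ambiguous."""
    params = dict(params)          # don't mutate the caller's dict
    port = params.pop("port", None)
    if port is not None:
        if "port_range_min" in params or "port_range_max" in params:
            raise ValueError("'port' conflicts with port_range_min/max")
        params["port_range_min"] = port
        params["port_range_max"] = port
    return params
```

With this in place, a playbook needs only `port: "{{ item }}"` for single-port rules.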
383,722 | 26,562,627,931 | IssuesEvent | 2023-01-20 17:06:58 | unibz-core/Scior-Tester | https://api.github.com/repos/unibz-core/Scior-Tester | opened | Review and update execution instruction documentations | documentation | @mozzherina, please review and update the execution instruction documentations:
1) https://github.com/unibz-core/Scior-Tester/blob/main/documentation/Scior-Tester-Build.md#execution-instructions
2) https://github.com/unibz-core/Scior-Tester/blob/main/documentation/Scior-Tester-Test1.md#execution-instructions
3) https://github.com/unibz-core/Scior-Tester/blob/main/documentation/Scior-Tester-Test2.md#execution-instructions | 1.0 | Review and update execution instruction documentations - @mozzherina, please review and update the execution instruction documentations:
1) https://github.com/unibz-core/Scior-Tester/blob/main/documentation/Scior-Tester-Build.md#execution-instructions
2) https://github.com/unibz-core/Scior-Tester/blob/main/documentation/Scior-Tester-Test1.md#execution-instructions
3) https://github.com/unibz-core/Scior-Tester/blob/main/documentation/Scior-Tester-Test2.md#execution-instructions | non_main | review and update execution instruction documentations mozzherina please review and update the execution instruction documentations | 0 |
694,799 | 23,830,352,243 | IssuesEvent | 2022-09-05 19:50:25 | themotte/rDrama | https://api.github.com/repos/themotte/rDrama | opened | 2FA may be malfunctioning via Google Authenticator? | bug P2 priority | So far there's two people I know of who have tried 2FA. One reported it worked, one reported it didn't. I dunno what's going on there. See https://www.themotte.org/post/20/a-writeup-on-the-reason-the/1039?context=8#context | 1.0 | 2FA may be malfunctioning via Google Authenticator? - So far there's two people I know of who have tried 2FA. One reported it worked, one reported it didn't. I dunno what's going on there. See https://www.themotte.org/post/20/a-writeup-on-the-reason-the/1039?context=8#context | non_main | may be malfunctioning via google authenticator so far there s two people i know of who have tried one reported it worked one reported it didn t i dunno what s going on there see | 0 |
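For reports like the 2FA one above, a usual first suspect is clock drift: Google Authenticator and the server both compute RFC 6238 TOTP codes from the current time, so a skew larger than the 30-second step makes valid codes fail on one side even though both are "working". A self-contained sketch of the algorithm (stdlib only, SHA-1 as in the RFC test vectors; whether the site actually uses TOTP with these defaults is an assumption):

```python
import hashlib
import hmac
import struct

def totp(secret, timestamp, digits=6, step=30):
    """Compute an RFC 6238 TOTP code for the given Unix timestamp."""
    counter = int(timestamp) // step                  # time-step counter
    digest = hmac.new(secret, struct.pack(">Q", counter),
                      hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Comparing `totp(secret, server_time)` against `totp(secret, server_time - step)` and `totp(secret, server_time + step)` is a common way to tolerate one step of drift when debugging mismatches like this.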
2,691 | 9,396,179,707 | IssuesEvent | 2019-04-08 06:20:06 | RalfKoban/MiKo-Analyzers | https://api.github.com/repos/RalfKoban/MiKo-Analyzers | closed | Methods that return IEnumerable should never return null | Area: analyzer Area: maintainability feature review | Methods that return `IEnumerable` are expected to be used in `foreach` loops or `Linq` queries.
As it's completely unexpected for developers to get a `NullReferenceException` or `ArgumentNullException` being thrown at such place, such situation should be avoided.
To avoid those situations, such methods are _**NOT**_ allowed to return `null`. | True | Methods that return IEnumerable should never return null - Methods that return `IEnumerable` are expected to be used in `foreach` loops or `Linq` queries.
As it's completely unexpected for developers to get a `NullReferenceException` or `ArgumentNullException` being thrown at such place, such situation should be avoided.
To avoid those situations, such methods are _**NOT**_ allowed to return `null`. | main | methods that return ienumerable should never return null methods that return ienumerable are expected to be used in foreach loops or linq queries as it s completely unexpected for developers to get a nullreferenceexception or argumentnullexception being thrown at such place such situation should be avoided to avoid those situations such methods are not allowed to return null | 1 |
5,278 | 26,671,726,137 | IssuesEvent | 2023-01-26 10:52:28 | beyarkay/eskom-calendar | https://api.github.com/repos/beyarkay/eskom-calendar | closed | Schedule missing for Manguang | bug waiting-on-maintainer missing-area-schedule | Power appears to be provided by Centlec: https://www.centlec.co.za/LoadShedding/LoadSheddingDocuments with the schedule available [here](https://www.centlec.co.za/LoadShedding/ViewFile?filePath=wwwroot%2FUploadedFiles%2FLoadShedding%2FCENTLEC%20Load%20Shedding%20Schedule%202022%20Updated_245d.pdf). | True | Schedule missing for Manguang - Power appears to be provided by Centlec: https://www.centlec.co.za/LoadShedding/LoadSheddingDocuments with the schedule available [here](https://www.centlec.co.za/LoadShedding/ViewFile?filePath=wwwroot%2FUploadedFiles%2FLoadShedding%2FCENTLEC%20Load%20Shedding%20Schedule%202022%20Updated_245d.pdf). | main | schedule missing for manguang power appears to be provided by centlec with the schedule available | 1 |
419,155 | 12,218,290,662 | IssuesEvent | 2020-05-01 19:00:55 | vz-risk/VCDB | https://api.github.com/repos/vz-risk/VCDB | opened | Nine million logs of Brits' road journeys spill onto the internet from password-less number-plate camera dashboard | Breach Error Priority 2020 | https://www.theregister.co.uk/2020/04/28/anpr_sheffield_council/ | 1.0 | Nine million logs of Brits' road journeys spill onto the internet from password-less number-plate camera dashboard - https://www.theregister.co.uk/2020/04/28/anpr_sheffield_council/ | non_main | nine million logs of brits road journeys spill onto the internet from password less number plate camera dashboard | 0 |
68,857 | 8,357,293,065 | IssuesEvent | 2018-10-02 21:05:53 | AndrewOkonar/5cube | https://api.github.com/repos/AndrewOkonar/5cube | closed | Kasino | Design | <b>Manager Name:</b> Zkladov
<b>Client Name:</b> Carlos
<b>Contact with manager:</b> Slack
<b>Contact with client: Slack</b> (Zkladov) | 1.0 | Kasino - <b>Manager Name:</b> Zkladov
<b>Client Name:</b> Carlos
<b>Contact with manager:</b> Slack
<b>Contact with client: Slack</b> (Zkladov) | non_main | kasino manager name zkladov client name carlos contact with manager slack contact with client slack zkladov | 0 |
270,448 | 23,509,802,950 | IssuesEvent | 2022-08-18 15:30:14 | ubtue/DatenProbleme | https://api.github.com/repos/ubtue/DatenProbleme | closed | ISSN 0269-1205 | Literature and Theology (Oxford) | Berichtsjahr | ready for testing Zotero_SEMI-AUTO | #### URL
https://academic.oup.com/litthe/issue/35/4
#### Import-Translator
Single and batch import:
ubtue_Oxford Academic.js
### Problem description
The journal issue has **2021** as its reporting year. However, the online publication year is imported everywhere as the year:

| 1.0 | ISSN 0269-1205 | Literature and Theology (Oxford) | Berichtsjahr - #### URL
https://academic.oup.com/litthe/issue/35/4
#### Import-Translator
Single and batch import:
ubtue_Oxford Academic.js
### Problem description
The journal issue has **2021** as its reporting year. However, the online publication year is imported everywhere as the year:

| non_main | issn literature and theology oxford berichtsjahr url import translator einzel und mehrfachimport ubtue oxford academic js problembeschreibung das heft hat als berichtsjahr allerdings wird überall als jahr das erscheinungsjahr online importiert | 0 |
2,112 | 7,187,125,235 | IssuesEvent | 2018-02-02 03:05:29 | Microsoft/DirectXMath | https://api.github.com/repos/Microsoft/DirectXMath | closed | Remove 17.1 compiler support | maintainence | When the Xbox One XDK platform drops support for the 17.1 compiler, I can remove one remaining adapter:
- Remove `XM_CTOR_DEFAULT` adapter and replace it with `=default`
- The ``XMMatrixMultiply`` and ``XMMatrixMultiplyTranspose`` implementation also have a workaround for compilers prior to VS 2013 | True | Remove 17.1 compiler support - When the Xbox One XDK platform drops support for the 17.1 compiler, I can remove one remaining adapter:
- Remove `XM_CTOR_DEFAULT` adapter and replace it with `=default`
- The ``XMMatrixMultiply`` and ``XMMatrixMultiplyTranspose`` implementation also have a workaround for compilers prior to VS 2013 | main | remove compiler support when the xbox one xdk platform drops support for the compiler i can remove one remaining adapter remove xm ctor default adapter and replace it with default the xmmatrixmultiply and xmmatrixmultiplytranspose implementation also have a workaround for compilers prior to vs | 1 |
475,982 | 13,731,539,624 | IssuesEvent | 2020-10-05 01:25:18 | iragm/fishauctions | https://api.github.com/repos/iragm/fishauctions | closed | Add breederboard showing top breeders | priority | Two I would like to see are most lots posted and greatest diversity of fish posted. | 1.0 | Add breederboard showing top breeders - Two I would like to see are most lots posted and greatest diversity of fish posted. | non_main | add breederboard showing top breeders two i would like to see are most lots posted and greatest diversity of fish posted | 0 |
6,456 | 2,588,156,728 | IssuesEvent | 2015-02-17 23:00:04 | PresConsUIUC/PSAP | https://api.github.com/repos/PresConsUIUC/PSAP | closed | Delete unused FIDG images | priority-low task | Not urgent at all, but FYI:
The code I wrote that imports FIDG content into the application is able to compile a list of unused images. These are images that exist somewhere inside the FormatIDGuide-HTML folder, but are never referenced in img tags, and thus aren't showing up anywhere. Here is the current list.
Feel free to delete them if you know they are no longer needed.
I'm not assigning this to anyone or any milestone in particular, and it's OK if it never gets done. This is a rainy-day, "I'm so bored I could almost do this" task.
(Ignore everything before FormatIDGuide-HTML in the paths)
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/2inORAudioLarge.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/audiotape-or-quarter3.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/cylinder-moldy.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/cylinder-wax-brown-damaged1.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/cylinder-wax-brown-damaged1@2x.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/dvd_thumb_150x.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/film-16mm-magstock.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/film-16mm-magstock@2x.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/Film-Gauges.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/film-polyester-lightpiping-ucla.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/film-polyester-lightpiping-ucla@2x.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/filmcore-ucla.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/filmcore-ucla@2x.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/filmreel-wcan-ucla.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/filmreel-wcan-ucla@2x.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/large_plasticCyl.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/large_waxCyl.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/magnetictape-shedonguide.JPG
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/magnetictape-shedonhead.JPG
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/magnetictape-shedonhead@2x.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/polyesterfilm-lightpipe-ucla.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/polyesterfilm-lightpipe-ucla@2x.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/quarterInchAudio_Thumb_150.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/record-12in-lacquerdisc.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/record-12in-vinyldisc.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/record-45rpm-slystone.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/record-45rpm-slystone@2x.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/record-edisondiamonddisc.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/record-lacquer-scale1.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/record-lacquer-scale1@2x.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/UNC_Safetyfilm.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/binding-periodical-sidesewn-flatback1a_cropped.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/binding-sidesewn-marbling1b_cropped.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/binding-sidesewn-marbling1c_cropped.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/book-24310_BT_02_SpineTear_cropped.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/carbon-copy_ink-on-paper_manifold_damaged01.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/ferrogallic-inbook1.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/map-mold-JMcMann1_cropped.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/map-mold-JMcMann3_cropped.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/office-hectograph-12.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/office-hectograph-cropped.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/paper-brittle-group.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/paper-mapfold.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/spirit_duplication_master.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/staplebind.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/thermofax05.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/albumen_stereoview2_a.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/albumen_stereoview4_a.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/chromogenic-5.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/collotype02.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/daguerreotype3.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/gravure_microDetail_aquatint.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/gravure_microDetail_screenPattern.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/inkjet-uncoatedpaper.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/microfiche-sheet_04.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/microfiche-sheet_04@2x.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/microprint-detail_03.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/negative-glass-cons01-backlit.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/negative-glass-cons01-backlit@2x.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/negative-glass-cons01-unlit.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/negative-glass-cons01-unlit@2x.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/photo-3printlayers-silver.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/photo-3printlayers-silver@2x.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/photogravures_booklet03.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/photomechanical_letterpresshalftone-detail.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/rotogravure4.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/rotogravure5-detail.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/rotogravure6.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/woodburtype_45_object_55_ULTRALARGE.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/woodburtype_45_object_60_ULTRALARGE.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/woodburtype_45_object_61_ULTRALARGE.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/woodburtype_45_object_63_ULTRALARGE.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/woodburytype_01_front_45_8x10_ULTRALARGE.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/woodburytype_ID_Usage2_Fullscreen-detail1.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/supplementary/images/adhesive-cellotape-scissors-Philippa-Willitts.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/supplementary/images/adhesive-masking-taperolls.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/supplementary/images/oversize-storage_flat-files04.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/supplementary/images/oversize-storage_vertical-hanging-storage06.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/supplementary/images/record-storage.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/supplementary/images/record-storage@2x.JPG
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/supplementary/images/storage_steel-file-cabinet01.jpg | 1.0 | Delete unused FIDG images - Not urgent at all, but FYI:
The code I wrote that imports FIDG content into the application is able to compile a list of unused images. These are images that exist somewhere inside the FormatIDGuide-HTML folder, but are never referenced in img tags, and thus aren't showing up anywhere. Here is the current list.
Feel free to delete them if you know they are no longer needed.
I'm not assigning this to anyone or any milestone in particular, and it's OK if it never gets done. This is a rainy-day, "I'm so bored I could almost do this" task.
(Ignore everything before FormatIDGuide-HTML in the paths)
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/2inORAudioLarge.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/audiotape-or-quarter3.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/cylinder-moldy.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/cylinder-wax-brown-damaged1.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/cylinder-wax-brown-damaged1@2x.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/dvd_thumb_150x.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/film-16mm-magstock.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/film-16mm-magstock@2x.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/Film-Gauges.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/film-polyester-lightpiping-ucla.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/film-polyester-lightpiping-ucla@2x.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/filmcore-ucla.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/filmcore-ucla@2x.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/filmreel-wcan-ucla.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/filmreel-wcan-ucla@2x.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/large_plasticCyl.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/large_waxCyl.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/magnetictape-shedonguide.JPG
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/magnetictape-shedonhead.JPG
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/magnetictape-shedonhead@2x.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/polyesterfilm-lightpipe-ucla.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/polyesterfilm-lightpipe-ucla@2x.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/quarterInchAudio_Thumb_150.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/record-12in-lacquerdisc.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/record-12in-vinyldisc.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/record-45rpm-slystone.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/record-45rpm-slystone@2x.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/record-edisondiamonddisc.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/record-lacquer-scale1.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/record-lacquer-scale1@2x.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/avmedia/images/UNC_Safetyfilm.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/binding-periodical-sidesewn-flatback1a_cropped.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/binding-sidesewn-marbling1b_cropped.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/binding-sidesewn-marbling1c_cropped.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/book-24310_BT_02_SpineTear_cropped.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/carbon-copy_ink-on-paper_manifold_damaged01.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/ferrogallic-inbook1.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/map-mold-JMcMann1_cropped.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/map-mold-JMcMann3_cropped.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/office-hectograph-12.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/office-hectograph-cropped.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/paper-brittle-group.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/paper-mapfold.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/spirit_duplication_master.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/staplebind.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/papersbooks/images/thermofax05.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/albumen_stereoview2_a.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/albumen_stereoview4_a.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/chromogenic-5.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/collotype02.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/daguerreotype3.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/gravure_microDetail_aquatint.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/gravure_microDetail_screenPattern.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/inkjet-uncoatedpaper.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/microfiche-sheet_04.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/microfiche-sheet_04@2x.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/microprint-detail_03.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/negative-glass-cons01-backlit.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/negative-glass-cons01-backlit@2x.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/negative-glass-cons01-unlit.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/negative-glass-cons01-unlit@2x.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/photo-3printlayers-silver.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/photo-3printlayers-silver@2x.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/photogravures_booklet03.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/photomechanical_letterpresshalftone-detail.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/rotogravure4.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/rotogravure5-detail.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/rotogravure6.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/woodburtype_45_object_55_ULTRALARGE.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/woodburtype_45_object_60_ULTRALARGE.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/woodburtype_45_object_61_ULTRALARGE.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/woodburtype_45_object_63_ULTRALARGE.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/woodburytype_01_front_45_8x10_ULTRALARGE.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/profiles/photosimages/images/woodburytype_ID_Usage2_Fullscreen-detail1.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/supplementary/images/adhesive-cellotape-scissors-Philippa-Willitts.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/supplementary/images/adhesive-masking-taperolls.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/supplementary/images/oversize-storage_flat-files04.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/supplementary/images/oversize-storage_vertical-hanging-storage06.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/supplementary/images/record-storage.jpg
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/supplementary/images/record-storage@2x.JPG
/Volumes/Data/alexd/Projects/psap/db/seed_data/FormatIDGuide-HTML/supplementary/images/storage_steel-file-cabinet01.jpg | non_main | delete unused fidg images not urgent at all but fyi the code i wrote that imports fidg content into the application is able to compile a list of unused images these are images that exist somewhere inside the formatidguide html folder but are never referenced in img tags and thus aren t showing up anywhere here is the current list feel free to delete them if you know they are no longer needed i m not assigning this to anyone or any milestone in particular and it s ok if it never gets done this is a rainy day i m so bored i could almost do this task ignore everything before formatidguide html in the paths volumes data alexd projects psap db seed data formatidguide html profiles avmedia images jpg volumes data alexd projects psap db seed data formatidguide html profiles avmedia images audiotape or jpg volumes data alexd projects psap db seed data formatidguide html profiles avmedia images cylinder moldy jpg volumes data alexd projects psap db seed data formatidguide html profiles avmedia images cylinder wax brown jpg volumes data alexd projects psap db seed data formatidguide html profiles avmedia images cylinder wax brown jpg volumes data alexd projects psap db seed data formatidguide html profiles avmedia images dvd thumb jpg volumes data alexd projects psap db seed data formatidguide html profiles avmedia images film magstock jpg volumes data alexd projects psap db seed data formatidguide html profiles avmedia images film magstock jpg volumes data alexd projects psap db seed data formatidguide html profiles avmedia images film gauges jpg volumes data alexd projects psap db seed data formatidguide html profiles avmedia images film polyester lightpiping ucla jpg volumes data alexd projects psap db seed data formatidguide html profiles avmedia images film polyester lightpiping ucla jpg volumes data alexd projects psap db seed data 
formatidguide html profiles avmedia images filmcore ucla jpg volumes data alexd projects psap db seed data formatidguide html profiles avmedia images filmcore ucla jpg volumes data alexd projects psap db seed data formatidguide html profiles avmedia images filmreel wcan ucla jpg volumes data alexd projects psap db seed data formatidguide html profiles avmedia images filmreel wcan ucla jpg volumes data alexd projects psap db seed data formatidguide html profiles avmedia images large plasticcyl jpg volumes data alexd projects psap db seed data formatidguide html profiles avmedia images large waxcyl jpg volumes data alexd projects psap db seed data formatidguide html profiles avmedia images magnetictape shedonguide jpg volumes data alexd projects psap db seed data formatidguide html profiles avmedia images magnetictape shedonhead jpg volumes data alexd projects psap db seed data formatidguide html profiles avmedia images magnetictape shedonhead jpg volumes data alexd projects psap db seed data formatidguide html profiles avmedia images polyesterfilm lightpipe ucla jpg volumes data alexd projects psap db seed data formatidguide html profiles avmedia images polyesterfilm lightpipe ucla jpg volumes data alexd projects psap db seed data formatidguide html profiles avmedia images quarterinchaudio thumb jpg volumes data alexd projects psap db seed data formatidguide html profiles avmedia images record lacquerdisc jpg volumes data alexd projects psap db seed data formatidguide html profiles avmedia images record vinyldisc jpg volumes data alexd projects psap db seed data formatidguide html profiles avmedia images record slystone jpg volumes data alexd projects psap db seed data formatidguide html profiles avmedia images record slystone jpg volumes data alexd projects psap db seed data formatidguide html profiles avmedia images record edisondiamonddisc jpg volumes data alexd projects psap db seed data formatidguide html profiles avmedia images record lacquer jpg volumes data 
alexd projects psap db seed data formatidguide html profiles avmedia images record lacquer jpg volumes data alexd projects psap db seed data formatidguide html profiles avmedia images unc safetyfilm jpg volumes data alexd projects psap db seed data formatidguide html profiles papersbooks images binding periodical sidesewn cropped jpg volumes data alexd projects psap db seed data formatidguide html profiles papersbooks images binding sidesewn cropped jpg volumes data alexd projects psap db seed data formatidguide html profiles papersbooks images binding sidesewn cropped jpg volumes data alexd projects psap db seed data formatidguide html profiles papersbooks images book bt spinetear cropped jpg volumes data alexd projects psap db seed data formatidguide html profiles papersbooks images carbon copy ink on paper manifold jpg volumes data alexd projects psap db seed data formatidguide html profiles papersbooks images ferrogallic jpg volumes data alexd projects psap db seed data formatidguide html profiles papersbooks images map mold cropped jpg volumes data alexd projects psap db seed data formatidguide html profiles papersbooks images map mold cropped jpg volumes data alexd projects psap db seed data formatidguide html profiles papersbooks images office hectograph jpg volumes data alexd projects psap db seed data formatidguide html profiles papersbooks images office hectograph cropped jpg volumes data alexd projects psap db seed data formatidguide html profiles papersbooks images paper brittle group jpg volumes data alexd projects psap db seed data formatidguide html profiles papersbooks images paper mapfold jpg volumes data alexd projects psap db seed data formatidguide html profiles papersbooks images spirit duplication master jpg volumes data alexd projects psap db seed data formatidguide html profiles papersbooks images staplebind jpg volumes data alexd projects psap db seed data formatidguide html profiles papersbooks images jpg volumes data alexd projects psap 
db seed data formatidguide html profiles photosimages images albumen a jpg volumes data alexd projects psap db seed data formatidguide html profiles photosimages images albumen a jpg volumes data alexd projects psap db seed data formatidguide html profiles photosimages images chromogenic jpg volumes data alexd projects psap db seed data formatidguide html profiles photosimages images jpg volumes data alexd projects psap db seed data formatidguide html profiles photosimages images jpg volumes data alexd projects psap db seed data formatidguide html profiles photosimages images gravure microdetail aquatint jpg volumes data alexd projects psap db seed data formatidguide html profiles photosimages images gravure microdetail screenpattern jpg volumes data alexd projects psap db seed data formatidguide html profiles photosimages images inkjet uncoatedpaper jpg volumes data alexd projects psap db seed data formatidguide html profiles photosimages images microfiche sheet jpg volumes data alexd projects psap db seed data formatidguide html profiles photosimages images microfiche sheet jpg volumes data alexd projects psap db seed data formatidguide html profiles photosimages images microprint detail jpg volumes data alexd projects psap db seed data formatidguide html profiles photosimages images negative glass backlit jpg volumes data alexd projects psap db seed data formatidguide html profiles photosimages images negative glass backlit jpg volumes data alexd projects psap db seed data formatidguide html profiles photosimages images negative glass unlit jpg volumes data alexd projects psap db seed data formatidguide html profiles photosimages images negative glass unlit jpg volumes data alexd projects psap db seed data formatidguide html profiles photosimages images photo silver jpg volumes data alexd projects psap db seed data formatidguide html profiles photosimages images photo silver jpg volumes data alexd projects psap db seed data formatidguide html profiles 
photosimages images photogravures jpg volumes data alexd projects psap db seed data formatidguide html profiles photosimages images photomechanical letterpresshalftone detail jpg volumes data alexd projects psap db seed data formatidguide html profiles photosimages images jpg volumes data alexd projects psap db seed data formatidguide html profiles photosimages images detail jpg volumes data alexd projects psap db seed data formatidguide html profiles photosimages images jpg volumes data alexd projects psap db seed data formatidguide html profiles photosimages images woodburtype object ultralarge jpg volumes data alexd projects psap db seed data formatidguide html profiles photosimages images woodburtype object ultralarge jpg volumes data alexd projects psap db seed data formatidguide html profiles photosimages images woodburtype object ultralarge jpg volumes data alexd projects psap db seed data formatidguide html profiles photosimages images woodburtype object ultralarge jpg volumes data alexd projects psap db seed data formatidguide html profiles photosimages images woodburytype front ultralarge jpg volumes data alexd projects psap db seed data formatidguide html profiles photosimages images woodburytype id fullscreen jpg volumes data alexd projects psap db seed data formatidguide html supplementary images adhesive cellotape scissors philippa willitts jpg volumes data alexd projects psap db seed data formatidguide html supplementary images adhesive masking taperolls jpg volumes data alexd projects psap db seed data formatidguide html supplementary images oversize storage flat jpg volumes data alexd projects psap db seed data formatidguide html supplementary images oversize storage vertical hanging jpg volumes data alexd projects psap db seed data formatidguide html supplementary images record storage jpg volumes data alexd projects psap db seed data formatidguide html supplementary images record storage jpg volumes data alexd projects psap db seed data 
formatidguide html supplementary images storage steel file jpg | 0 |
2,940 | 10,563,169,248 | IssuesEvent | 2019-10-04 20:14:16 | FairwindsOps/reckoner | https://api.github.com/repos/FairwindsOps/reckoner | closed | Schema validation | Maintainability Usability enhancement stretch | Reckoner should always evaluate the course.yaml with a true schema validation before processing any actions.
* Would lead to less half baked runs
* Have less burden on the user to constantly try syntax
* Better user experience for unused blocks of yaml
* Hopefully could provide a reference for users wanting to know the full breadth of options in a course.yaml
| True | Schema validation - Reckoner should always evaluate the course.yaml with a true schema validation before processing any actions.
* Would lead to less half baked runs
* Have less burden on the user to constantly try syntax
* Better user experience for unused blocks of yaml
* Hopefully could provide a reference for users wanting to know the full breadth of options in a course.yaml
| main | schema validation reckoner should always evaluate the course yaml with a true schema validation before processing any actions would lead to less half baked runs have less burden on the user to constantly try syntax better user experience for unused blocks of yaml hopefully could provide a reference for users wanting to know the full breadth of options in a course yaml | 1 |
3,259 | 12,413,819,721 | IssuesEvent | 2020-05-22 13:27:49 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | some aireos modules have options which should have been removed for Ansible 2.9 | affects_2.10 aireos bug cisco collection collection:community.network module needs_collection_redirect needs_maintainer needs_triage networking support:community | ##### SUMMARY
As detected by https://github.com/ansible/ansible/pull/66920, (some of) these modules have options marked with `removed_in_version='2.9'`. These options should have been removed before Ansible 2.9 was released. Since that is too late, it would be good if they could be removed before Ansible 2.10 is released.
```
lib/ansible/modules/network/aireos/aireos_command.py:0:0: ansible-deprecated-version: Argument 'host' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/network/aireos/aireos_command.py:0:0: ansible-deprecated-version: Argument 'password' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/network/aireos/aireos_command.py:0:0: ansible-deprecated-version: Argument 'port' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/network/aireos/aireos_command.py:0:0: ansible-deprecated-version: Argument 'ssh_keyfile' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/network/aireos/aireos_command.py:0:0: ansible-deprecated-version: Argument 'timeout' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/network/aireos/aireos_command.py:0:0: ansible-deprecated-version: Argument 'username' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/network/aireos/aireos_config.py:0:0: ansible-deprecated-version: Argument 'host' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/network/aireos/aireos_config.py:0:0: ansible-deprecated-version: Argument 'password' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/network/aireos/aireos_config.py:0:0: ansible-deprecated-version: Argument 'port' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/network/aireos/aireos_config.py:0:0: ansible-deprecated-version: Argument 'ssh_keyfile' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/network/aireos/aireos_config.py:0:0: ansible-deprecated-version: Argument 'timeout' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/network/aireos/aireos_config.py:0:0: ansible-deprecated-version: Argument 'username' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/network/aireos/aireos_command.py
lib/ansible/modules/network/aireos/aireos_config.py
##### ANSIBLE VERSION
```paste below
2.10
```
| True | some aireos modules have options which should have been removed for Ansible 2.9 - ##### SUMMARY
As detected by https://github.com/ansible/ansible/pull/66920, (some of) these modules have options marked with `removed_in_version='2.9'`. These options should have been removed before Ansible 2.9 was released. Since that is too late, it would be good if they could be removed before Ansible 2.10 is released.
```
lib/ansible/modules/network/aireos/aireos_command.py:0:0: ansible-deprecated-version: Argument 'host' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/network/aireos/aireos_command.py:0:0: ansible-deprecated-version: Argument 'password' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/network/aireos/aireos_command.py:0:0: ansible-deprecated-version: Argument 'port' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/network/aireos/aireos_command.py:0:0: ansible-deprecated-version: Argument 'ssh_keyfile' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/network/aireos/aireos_command.py:0:0: ansible-deprecated-version: Argument 'timeout' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/network/aireos/aireos_command.py:0:0: ansible-deprecated-version: Argument 'username' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/network/aireos/aireos_config.py:0:0: ansible-deprecated-version: Argument 'host' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/network/aireos/aireos_config.py:0:0: ansible-deprecated-version: Argument 'password' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/network/aireos/aireos_config.py:0:0: ansible-deprecated-version: Argument 'port' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/network/aireos/aireos_config.py:0:0: ansible-deprecated-version: Argument 'ssh_keyfile' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/network/aireos/aireos_config.py:0:0: ansible-deprecated-version: Argument 'timeout' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/network/aireos/aireos_config.py:0:0: ansible-deprecated-version: Argument 'username' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/network/aireos/aireos_command.py
lib/ansible/modules/network/aireos/aireos_config.py
##### ANSIBLE VERSION
```paste below
2.10
```
| main | some aireos modules have options which should have been removed for ansible summary as detected by some of these modules have options marked with removed in version these options should have been removed before ansible was released since that is too late it would be good if they could be removed before ansible is released lib ansible modules network aireos aireos command py ansible deprecated version argument host in argument spec has a deprecated removed in version i e the version is less than or equal to the current version of ansible lib ansible modules network aireos aireos command py ansible deprecated version argument password in argument spec has a deprecated removed in version i e the version is less than or equal to the current version of ansible lib ansible modules network aireos aireos command py ansible deprecated version argument port in argument spec has a deprecated removed in version i e the version is less than or equal to the current version of ansible lib ansible modules network aireos aireos command py ansible deprecated version argument ssh keyfile in argument spec has a deprecated removed in version i e the version is less than or equal to the current version of ansible lib ansible modules network aireos aireos command py ansible deprecated version argument timeout in argument spec has a deprecated removed in version i e the version is less than or equal to the current version of ansible lib ansible modules network aireos aireos command py ansible deprecated version argument username in argument spec has a deprecated removed in version i e the version is less than or equal to the current version of ansible lib ansible modules network aireos aireos config py ansible deprecated version argument host in argument spec has a deprecated removed in version i e the version is less than or equal to the current version of ansible lib ansible modules network aireos aireos config py ansible deprecated version argument password in argument spec 
has a deprecated removed in version i e the version is less than or equal to the current version of ansible lib ansible modules network aireos aireos config py ansible deprecated version argument port in argument spec has a deprecated removed in version i e the version is less than or equal to the current version of ansible lib ansible modules network aireos aireos config py ansible deprecated version argument ssh keyfile in argument spec has a deprecated removed in version i e the version is less than or equal to the current version of ansible lib ansible modules network aireos aireos config py ansible deprecated version argument timeout in argument spec has a deprecated removed in version i e the version is less than or equal to the current version of ansible lib ansible modules network aireos aireos config py ansible deprecated version argument username in argument spec has a deprecated removed in version i e the version is less than or equal to the current version of ansible issue type bug report component name lib ansible modules network aireos aireos command py lib ansible modules network aireos aireos config py ansible version paste below | 1 |
676 | 4,217,851,942 | IssuesEvent | 2016-06-30 14:21:50 | caskroom/homebrew-cask | https://api.github.com/repos/caskroom/homebrew-cask | closed | Bug report: Ruby error when installing Hashcat 3.00 | awaiting maintainer feedback awaiting user reply bug cask | ### Description of issue
Cannot install cask Hashcat in version 3.00
### Output of `brew cask install hashcat --verbose`
```
==> Satisfying dependencies
==> Installing Formula dependencies from Homebrew
unar ... already installed
complete
==> Downloading https://hashcat.net/files/hashcat-3.00.7z
/usr/bin/curl -fLA Homebrew-cask v0.51+ (Ruby 2.0.0-648) https://hashcat.net/files/hashcat-3.00.7z -C 0 -o /Users/mkowalski/Library/Caches/Homebrew/hashcat-3.00.7z.incomplete
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2624k 100 2624k 0 0 3055k 0 --:--:-- --:--:-- --:--:-- 3054k
==> Verifying checksum for Cask hashcat
==> ditto: /private/var/folders/0k/x6tgnb5573z8vpg_gjv04vm40000gn/T/d20160630-12520-zqlam3/./hashcat-3.00/rules/hybrid/append_d.rule: Permission denied
Error: Permission denied - /var/folders/0k/x6tgnb5573z8vpg_gjv04vm40000gn/T/d20160630-12520-zqlam3/hashcat-3.00/rules/hybrid/append_d.rule
Most likely, this means you have an outdated version of homebrew-cask. Please run:
brew uninstall --force brew-cask; brew untap phinze/cask; brew update; brew cleanup; brew cask cleanup
If this doesn’t fix the problem, please report this bug:
https://github.com/caskroom/homebrew-cask#reporting-bugs
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1439:in `unlink'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1439:in `block in remove_file'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1444:in `platform_support'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1438:in `remove_file'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1427:in `remove'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:770:in `block in remove_entry'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1477:in `block (2 levels) in postorder_traverse'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1477:in `block (2 levels) in postorder_traverse'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1477:in `block (2 levels) in postorder_traverse'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1477:in `block (2 levels) in postorder_traverse'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1481:in `postorder_traverse'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1476:in `block in postorder_traverse'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1475:in `each'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1475:in `postorder_traverse'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1476:in `block in postorder_traverse'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1475:in `each'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1475:in `postorder_traverse'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1476:in `block in postorder_traverse'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1475:in `each'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1475:in `postorder_traverse'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1476:in `block in postorder_traverse'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1475:in `each'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1475:in `postorder_traverse'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:768:in `remove_entry'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/tmpdir.rb:94:in `ensure in mktmpdir'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/tmpdir.rb:94:in `mktmpdir'
/usr/local/Library/Taps/caskroom/homebrew-cask/lib/hbc/container/generic_unar.rb:13:in `extract'
/usr/local/Library/Taps/caskroom/homebrew-cask/lib/hbc/installer.rb:127:in `extract_primary_container'
/usr/local/Library/Taps/caskroom/homebrew-cask/lib/hbc/installer.rb:76:in `install'
/usr/local/Library/Taps/caskroom/homebrew-cask/lib/hbc/cli/install.rb:23:in `block in install_casks'
/usr/local/Library/Taps/caskroom/homebrew-cask/lib/hbc/cli/install.rb:19:in `each'
/usr/local/Library/Taps/caskroom/homebrew-cask/lib/hbc/cli/install.rb:19:in `install_casks'
/usr/local/Library/Taps/caskroom/homebrew-cask/lib/hbc/cli/install.rb:8:in `run'
/usr/local/Library/Taps/caskroom/homebrew-cask/lib/hbc/cli.rb:83:in `run_command'
/usr/local/Library/Taps/caskroom/homebrew-cask/lib/hbc/cli.rb:121:in `process'
/usr/local/Library/Taps/caskroom/homebrew-cask/cmd/brew-cask.rb:26:in `<top (required)>'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:55:in `require'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:55:in `require'
/usr/local/Library/brew.rb:22:in `require?'
/usr/local/Library/brew.rb:93:in `<main>'
Error: Kernel.exit
```
### Output of `brew doctor`
```
Warning: Some directories in /usr/local/share/man aren't writable.
This can happen if you "sudo make install" software that isn't managed
by Homebrew. If a brew tries to add locale information to one of these
directories, then the install will fail during the link step.
You should probably `sudo chown -R $(whoami)` them:
/usr/local/share/man/de
/usr/local/share/man/de/man1
/usr/local/share/man/es
/usr/local/share/man/es/man1
/usr/local/share/man/fr
/usr/local/share/man/fr/man1
/usr/local/share/man/hr
/usr/local/share/man/hr/man1
/usr/local/share/man/hu
/usr/local/share/man/hu/man1
/usr/local/share/man/it
/usr/local/share/man/it/man1
/usr/local/share/man/ja
/usr/local/share/man/ja/man1
/usr/local/share/man/pl
/usr/local/share/man/pl/man1
/usr/local/share/man/pt_BR
/usr/local/share/man/pt_BR/man1
/usr/local/share/man/pt_PT
/usr/local/share/man/pt_PT/man1
/usr/local/share/man/ro
/usr/local/share/man/ro/man1
/usr/local/share/man/ru
/usr/local/share/man/ru/man1
/usr/local/share/man/sk
/usr/local/share/man/sk/man1
/usr/local/share/man/zh
/usr/local/share/man/zh/man1
Warning: Some keg-only formula are linked into the Cellar.
Linking a keg-only formula, such as gettext, into the cellar with
`brew link <formula>` will cause other formulae to detect them during
the `./configure` step. This may cause problems when compiling those
other formulae.
Binaries provided by keg-only formulae may override system binaries
with other strange results.
You may wish to `brew unlink` these brews:
openssl
```
### Output of `brew cask doctor`
```
==> macOS Release:
10.11
==> macOS Release with Patchlevel:
10.11.6
==> Hardware Architecture:
intel-64
==> Ruby Version:
2.0.0-p648
==> Ruby Path:
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/bin/ruby
==> Homebrew Version:
Homebrew 0.9.9 (git revision 91a2; last commit 2016-06-30)
Homebrew/homebrew-core (git revision 09a6; last commit 2016-06-30)
==> Homebrew Executable Path:
/usr/local/bin/brew
==> Homebrew Cellar Path:
/usr/local/Cellar
==> Homebrew Repository Path:
/usr/local
==> Homebrew Origin:
https://github.com/Homebrew/brew.git
==> Homebrew-cask Version:
0.60.0 (git revision 992c1f; last commit 10 minutes ago)
==> Homebrew-cask Install Location:
<NONE>
==> Homebrew-cask Staging Location:
/usr/local/Caskroom
==> Homebrew-cask Cached Downloads:
/Users/mkowalski/Library/Caches/Homebrew
/Users/mkowalski/Library/Caches/Homebrew/Casks
2 files, 5.1M (warning: run "brew cask cleanup")
==> Homebrew-cask Default Tap Path:
/usr/local/Library/Taps/caskroom/homebrew-cask
==> Homebrew-cask Alternate Cask Taps:
<NONE>
==> Homebrew-cask Default Tap Cask Count:
3205
==> Contents of $LOAD_PATH:
/usr/local/Library/Taps/caskroom/homebrew-cask/lib
/usr/local/Library/Homebrew
/Library/Ruby/Site/2.0.0
/Library/Ruby/Site/2.0.0/x86_64-darwin15
/Library/Ruby/Site/2.0.0/universal-darwin15
/Library/Ruby/Site
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/x86_64-darwin15
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/universal-darwin15
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/x86_64-darwin15
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/universal-darwin15
==> Contents of $RUBYLIB Environment Variable:
<NONE>
==> Contents of $RUBYOPT Environment Variable:
<NONE>
==> Contents of $RUBYPATH Environment Variable:
<NONE>
==> Contents of $RBENV_VERSION Environment Variable:
<NONE>
==> Contents of $CHRUBY_VERSION Environment Variable:
<NONE>
==> Contents of $GEM_HOME Environment Variable:
<NONE>
==> Contents of $GEM_PATH Environment Variable:
<NONE>
==> Contents of $BUNDLE_PATH Environment Variable:
<NONE>
==> Contents of $PATH Environment Variable:
PATH="/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin:/usr/local/Library/Taps/caskroom/homebrew-cask/cmd:/usr/local/Library/ENV/scm"
==> Contents of $SHELL Environment Variable:
SHELL="/bin/bash"
==> Contents of Locale Environment Variables:
LC_ALL="en_US.UTF-8"
==> Running As Privileged User:
No
```
| True | Bug report: Ruby error when installing Hashcat 3.00 - ### Description of issue
Cannot install cask Hashcat in version 3.00
### Output of `brew cask install hashcat --verbose`
```
==> Satisfying dependencies
==> Installing Formula dependencies from Homebrew
unar ... already installed
complete
==> Downloading https://hashcat.net/files/hashcat-3.00.7z
/usr/bin/curl -fLA Homebrew-cask v0.51+ (Ruby 2.0.0-648) https://hashcat.net/files/hashcat-3.00.7z -C 0 -o /Users/mkowalski/Library/Caches/Homebrew/hashcat-3.00.7z.incomplete
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2624k 100 2624k 0 0 3055k 0 --:--:-- --:--:-- --:--:-- 3054k
==> Verifying checksum for Cask hashcat
==> ditto: /private/var/folders/0k/x6tgnb5573z8vpg_gjv04vm40000gn/T/d20160630-12520-zqlam3/./hashcat-3.00/rules/hybrid/append_d.rule: Permission denied
Error: Permission denied - /var/folders/0k/x6tgnb5573z8vpg_gjv04vm40000gn/T/d20160630-12520-zqlam3/hashcat-3.00/rules/hybrid/append_d.rule
Most likely, this means you have an outdated version of homebrew-cask. Please run:
brew uninstall --force brew-cask; brew untap phinze/cask; brew update; brew cleanup; brew cask cleanup
If this doesn’t fix the problem, please report this bug:
https://github.com/caskroom/homebrew-cask#reporting-bugs
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1439:in `unlink'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1439:in `block in remove_file'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1444:in `platform_support'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1438:in `remove_file'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1427:in `remove'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:770:in `block in remove_entry'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1477:in `block (2 levels) in postorder_traverse'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1477:in `block (2 levels) in postorder_traverse'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1477:in `block (2 levels) in postorder_traverse'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1477:in `block (2 levels) in postorder_traverse'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1481:in `postorder_traverse'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1476:in `block in postorder_traverse'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1475:in `each'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1475:in `postorder_traverse'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1476:in `block in postorder_traverse'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1475:in `each'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1475:in `postorder_traverse'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1476:in `block in postorder_traverse'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1475:in `each'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1475:in `postorder_traverse'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1476:in `block in postorder_traverse'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1475:in `each'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:1475:in `postorder_traverse'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:768:in `remove_entry'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/tmpdir.rb:94:in `ensure in mktmpdir'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/tmpdir.rb:94:in `mktmpdir'
/usr/local/Library/Taps/caskroom/homebrew-cask/lib/hbc/container/generic_unar.rb:13:in `extract'
/usr/local/Library/Taps/caskroom/homebrew-cask/lib/hbc/installer.rb:127:in `extract_primary_container'
/usr/local/Library/Taps/caskroom/homebrew-cask/lib/hbc/installer.rb:76:in `install'
/usr/local/Library/Taps/caskroom/homebrew-cask/lib/hbc/cli/install.rb:23:in `block in install_casks'
/usr/local/Library/Taps/caskroom/homebrew-cask/lib/hbc/cli/install.rb:19:in `each'
/usr/local/Library/Taps/caskroom/homebrew-cask/lib/hbc/cli/install.rb:19:in `install_casks'
/usr/local/Library/Taps/caskroom/homebrew-cask/lib/hbc/cli/install.rb:8:in `run'
/usr/local/Library/Taps/caskroom/homebrew-cask/lib/hbc/cli.rb:83:in `run_command'
/usr/local/Library/Taps/caskroom/homebrew-cask/lib/hbc/cli.rb:121:in `process'
/usr/local/Library/Taps/caskroom/homebrew-cask/cmd/brew-cask.rb:26:in `<top (required)>'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:55:in `require'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:55:in `require'
/usr/local/Library/brew.rb:22:in `require?'
/usr/local/Library/brew.rb:93:in `<main>'
Error: Kernel.exit
```
### Output of `brew doctor`
```
Warning: Some directories in /usr/local/share/man aren't writable.
This can happen if you "sudo make install" software that isn't managed
by Homebrew. If a brew tries to add locale information to one of these
directories, then the install will fail during the link step.
You should probably `sudo chown -R $(whoami)` them:
/usr/local/share/man/de
/usr/local/share/man/de/man1
/usr/local/share/man/es
/usr/local/share/man/es/man1
/usr/local/share/man/fr
/usr/local/share/man/fr/man1
/usr/local/share/man/hr
/usr/local/share/man/hr/man1
/usr/local/share/man/hu
/usr/local/share/man/hu/man1
/usr/local/share/man/it
/usr/local/share/man/it/man1
/usr/local/share/man/ja
/usr/local/share/man/ja/man1
/usr/local/share/man/pl
/usr/local/share/man/pl/man1
/usr/local/share/man/pt_BR
/usr/local/share/man/pt_BR/man1
/usr/local/share/man/pt_PT
/usr/local/share/man/pt_PT/man1
/usr/local/share/man/ro
/usr/local/share/man/ro/man1
/usr/local/share/man/ru
/usr/local/share/man/ru/man1
/usr/local/share/man/sk
/usr/local/share/man/sk/man1
/usr/local/share/man/zh
/usr/local/share/man/zh/man1
Warning: Some keg-only formula are linked into the Cellar.
Linking a keg-only formula, such as gettext, into the cellar with
`brew link <formula>` will cause other formulae to detect them during
the `./configure` step. This may cause problems when compiling those
other formulae.
Binaries provided by keg-only formulae may override system binaries
with other strange results.
You may wish to `brew unlink` these brews:
openssl
```
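The `brew doctor` warning above ends by prescribing a `sudo chown -R $(whoami)` on each listed directory. A minimal dry-run sketch of that fix follows; the directory list is abbreviated to three entries (the full list is in the output above), and the commands are only printed, not executed.

```shell
# Dry-run sketch of the ownership fix suggested by `brew doctor` above.
# It only builds and prints the commands; run the printed lines (with sudo)
# to actually apply the fix.
fix_cmds=""
for d in /usr/local/share/man/de /usr/local/share/man/es /usr/local/share/man/fr; do
  # brew doctor's suggestion per directory: sudo chown -R $(whoami) <dir>
  fix_cmds="$fix_cmds sudo chown -R $(whoami) $d;"
done
echo "$fix_cmds"
```

Running the printed commands requires sudo; extend the loop with the remaining man-page directories from the `brew doctor` output.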
### Output of `brew cask doctor`
```
==> macOS Release:
10.11
==> macOS Release with Patchlevel:
10.11.6
==> Hardware Architecture:
intel-64
==> Ruby Version:
2.0.0-p648
==> Ruby Path:
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/bin/ruby
==> Homebrew Version:
Homebrew 0.9.9 (git revision 91a2; last commit 2016-06-30)
Homebrew/homebrew-core (git revision 09a6; last commit 2016-06-30)
==> Homebrew Executable Path:
/usr/local/bin/brew
==> Homebrew Cellar Path:
/usr/local/Cellar
==> Homebrew Repository Path:
/usr/local
==> Homebrew Origin:
https://github.com/Homebrew/brew.git
==> Homebrew-cask Version:
0.60.0 (git revision 992c1f; last commit 10 minutes ago)
==> Homebrew-cask Install Location:
<NONE>
==> Homebrew-cask Staging Location:
/usr/local/Caskroom
==> Homebrew-cask Cached Downloads:
/Users/mkowalski/Library/Caches/Homebrew
/Users/mkowalski/Library/Caches/Homebrew/Casks
2 files, 5.1M (warning: run "brew cask cleanup")
==> Homebrew-cask Default Tap Path:
/usr/local/Library/Taps/caskroom/homebrew-cask
==> Homebrew-cask Alternate Cask Taps:
<NONE>
==> Homebrew-cask Default Tap Cask Count:
3205
==> Contents of $LOAD_PATH:
/usr/local/Library/Taps/caskroom/homebrew-cask/lib
/usr/local/Library/Homebrew
/Library/Ruby/Site/2.0.0
/Library/Ruby/Site/2.0.0/x86_64-darwin15
/Library/Ruby/Site/2.0.0/universal-darwin15
/Library/Ruby/Site
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/x86_64-darwin15
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/universal-darwin15
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/x86_64-darwin15
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/universal-darwin15
==> Contents of $RUBYLIB Environment Variable:
<NONE>
==> Contents of $RUBYOPT Environment Variable:
<NONE>
==> Contents of $RUBYPATH Environment Variable:
<NONE>
==> Contents of $RBENV_VERSION Environment Variable:
<NONE>
==> Contents of $CHRUBY_VERSION Environment Variable:
<NONE>
==> Contents of $GEM_HOME Environment Variable:
<NONE>
==> Contents of $GEM_PATH Environment Variable:
<NONE>
==> Contents of $BUNDLE_PATH Environment Variable:
<NONE>
==> Contents of $PATH Environment Variable:
PATH="/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin:/usr/local/Library/Taps/caskroom/homebrew-cask/cmd:/usr/local/Library/ENV/scm"
==> Contents of $SHELL Environment Variable:
SHELL="/bin/bash"
==> Contents of Locale Environment Variables:
LC_ALL="en_US.UTF-8"
==> Running As Privileged User:
No
```
| main | 1
70,127 | 9,378,275,179 | IssuesEvent | 2019-04-04 12:31:24 | DiODeProject/MuMoT | https://api.github.com/repos/DiODeProject/MuMoT | opened | Consider pointing binder icon to latest release branch | documentation enhancement question | Currently the binder icon in `README.md` and `getting_started.rst` point to the master branch. It may be better to have these point to the most recent release branch instead. If this is agreed then the following actions need to be taken:
- [ ] add a list of previous binder references, possibly under `mybinder.org` in `about.rst`
- [ ] update `development.rst` to include instructions on bumping the binder branch reference in `README.md` and `getting_started.rst`, and adding the previous binder branch reference to the list of previous releases, to enable ongoing access | 1.0 | non_main | 0
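The checklist above amounts to a one-line substitution in each doc file. A hedged sketch of bumping the Binder badge reference — the file path, badge URL shape, and release branch name `v1.0.0` are placeholders, not taken from the issue:

```shell
# Illustrative only: repoint a Binder badge from master to a release branch.
# A stand-in README is created in /tmp so the substitution can be shown end to end.
printf 'badge: https://mybinder.org/v2/gh/DiODeProject/MuMoT/master\n' > /tmp/binder_readme.md
# Swap the branch segment of the badge URL (GNU sed in-place edit).
sed -i 's|/MuMoT/master|/MuMoT/v1.0.0|' /tmp/binder_readme.md
cat /tmp/binder_readme.md
```

In the real repo the same substitution would be applied to `README.md` and `docs/getting_started.rst`, with the old badge URL appended to the list of previous releases.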
250,737 | 27,111,272,252 | IssuesEvent | 2023-02-15 15:28:23 | EliyaC/NodeGoat | https://api.github.com/repos/EliyaC/NodeGoat | closed | CVE-2017-16138 (High) detected in mime-1.2.11.tgz - autoclosed | security vulnerability | ## CVE-2017-16138 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mime-1.2.11.tgz</b></p></summary>
<p>A comprehensive library for mime-type mapping</p>
<p>Library home page: <a href="https://registry.npmjs.org/mime/-/mime-1.2.11.tgz">https://registry.npmjs.org/mime/-/mime-1.2.11.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/zaproxy/node_modules/mime/package.json</p>
<p>
Dependency Hierarchy:
- zaproxy-0.2.0.tgz (Root Library)
- request-2.36.0.tgz
- :x: **mime-1.2.11.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/EliyaC/NodeGoat/commit/2f9ac315d9e05728b7ce26ce7cf1b4e684e54fde">2f9ac315d9e05728b7ce26ce7cf1b4e684e54fde</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The mime module < 1.4.1, 2.0.1, 2.0.2 is vulnerable to regular expression denial of service when a mime lookup is performed on untrusted user input.
<p>Publish Date: 2018-06-07
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-16138>CVE-2017-16138</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-16138">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-16138</a></p>
<p>Release Date: 2018-06-07</p>
<p>Fix Resolution (mime): 1.4.1</p>
<p>Direct dependency fix Resolution (zaproxy): 0.3.0</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue | True | non_main | 0
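The report above gives a concrete fix resolution (`mime` 1.4.1, reached by upgrading the root `zaproxy` dependency to 0.3.0). A small sketch of checking whether a vendored `mime` falls below the fixed release, using `sort -V` for version ordering — the installed version string is hardcoded here, where a real audit would read it from `node_modules/mime/package.json`:

```shell
# Version-check sketch for the mime ReDoS report above.
installed="1.2.11"   # stand-in; normally parsed from the installed package.json
fixed="1.4.1"
# sort -V orders version strings; the smaller of the two sorts first.
lowest=$(printf '%s\n%s\n' "$installed" "$fixed" | sort -V | head -n1)
if [ "$lowest" = "$installed" ] && [ "$installed" != "$fixed" ]; then
  status="vulnerable"
else
  status="ok"
fi
echo "mime $installed: $status (fixed in $fixed)"
```

Per the report, the remediation is to upgrade the root library (`zaproxy` to 0.3.0) so the transitive `mime` resolves to 1.4.1 or later.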
4,248 | 21,056,694,515 | IssuesEvent | 2022-04-01 04:35:14 | HPCL/code-analysis | https://api.github.com/repos/HPCL/code-analysis | closed | CWE-1087 Class with Virtual Method without a Virtual Destructor | IN-PROGRESS CLAIMED ISO/IEC 5055:2021 Class VirtualMethod WEAKNESS CATEGORY: MAINTAINABILITY | **Usage Name**
Class with virtual method missing destructor
**Reference**
[https://cwe.mitre.org/data/definitions/1087](https://cwe.mitre.org/data/definitions/1087)
**Roles**
- the *Class*
- the *VirtualMethod*
**Detection Patterns**
- 8.2.38 ASCQM Implement Virtual Destructor for Classes with Virtual Methods | True | main | 1
63,090 | 14,656,667,640 | IssuesEvent | 2020-12-28 13:56:18 | fu1771695yongxie/learnGitBranching | https://api.github.com/repos/fu1771695yongxie/learnGitBranching | opened | WS-2018-0148 (Low) detected in utile-0.3.0.tgz | security vulnerability | ## WS-2018-0148 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>utile-0.3.0.tgz</b></p></summary>
<p>A drop-in replacement for `util` with some additional advantageous functions</p>
<p>Library home page: <a href="https://registry.npmjs.org/utile/-/utile-0.3.0.tgz">https://registry.npmjs.org/utile/-/utile-0.3.0.tgz</a></p>
<p>Path to dependency file: learnGitBranching/package.json</p>
<p>Path to vulnerable library: learnGitBranching/node_modules/utile/package.json</p>
<p>
Dependency Hierarchy:
- prompt-1.1.0.tgz (Root Library)
- :x: **utile-0.3.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fu1771695yongxie/learnGitBranching/commit/33cba5147b9149e15d524f7a0f485cf33acd1c2b">33cba5147b9149e15d524f7a0f485cf33acd1c2b</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
`utile` allocates uninitialized Buffers when a number is passed as input (affects versions before 0.3.0).
<p>Publish Date: 2018-07-16
<p>URL: <a href=https://hackerone.com/reports/321701>WS-2018-0148</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>1.8</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | non_main | 0
423,248 | 28,502,522,755 | IssuesEvent | 2023-04-18 18:30:50 | RoAlfonsin/text-based-adventure-i | https://api.github.com/repos/RoAlfonsin/text-based-adventure-i | closed | Create documentation and refactor code | documentation enhancement | Create documentation and refactor to make everything more clear | 1.0 | non_main | 0
184,506 | 21,784,904,613 | IssuesEvent | 2022-05-14 01:45:10 | rvvergara/expensify | https://api.github.com/repos/rvvergara/expensify | closed | CVE-2020-7608 (Medium) detected in yargs-parser-5.0.0.tgz, yargs-parser-11.1.1.tgz - autoclosed | security vulnerability | ## CVE-2020-7608 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>yargs-parser-5.0.0.tgz</b>, <b>yargs-parser-11.1.1.tgz</b></p></summary>
<p>
<details><summary><b>yargs-parser-5.0.0.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-5.0.0.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-5.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- node-sass-4.11.0.tgz (Root Library)
- sass-graph-2.2.4.tgz
- yargs-7.1.0.tgz
- :x: **yargs-parser-5.0.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>yargs-parser-11.1.1.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-11.1.1.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-11.1.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- webpack-dev-server-3.3.1.tgz (Root Library)
- yargs-12.0.5.tgz
- :x: **yargs-parser-11.1.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/rvvergara/expensify/commit/fdfd5fe0d2a536540aa7d35163ec94f119bc53f0">fdfd5fe0d2a536540aa7d35163ec94f119bc53f0</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
yargs-parser could be tricked into adding or modifying properties of Object.prototype using a "__proto__" payload.
<p>Publish Date: 2020-03-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7608>CVE-2020-7608</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/yargs/yargs-parser/commit/63810ca1ae1a24b08293a4d971e70e058c7a41e2">https://github.com/yargs/yargs-parser/commit/63810ca1ae1a24b08293a4d971e70e058c7a41e2</a></p>
<p>Release Date: 2020-03-16</p>
<p>Fix Resolution (yargs-parser): 5.0.1</p>
<p>Direct dependency fix Resolution (node-sass): 4.12.0</p><p>Fix Resolution (yargs-parser): 13.1.2</p>
<p>Direct dependency fix Resolution (webpack-dev-server): 3.11.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-7608 (Medium) detected in yargs-parser-5.0.0.tgz, yargs-parser-11.1.1.tgz - autoclosed - ## CVE-2020-7608 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>yargs-parser-5.0.0.tgz</b>, <b>yargs-parser-11.1.1.tgz</b></p></summary>
<p>
<details><summary><b>yargs-parser-5.0.0.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-5.0.0.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-5.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- node-sass-4.11.0.tgz (Root Library)
- sass-graph-2.2.4.tgz
- yargs-7.1.0.tgz
- :x: **yargs-parser-5.0.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>yargs-parser-11.1.1.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-11.1.1.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-11.1.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- webpack-dev-server-3.3.1.tgz (Root Library)
- yargs-12.0.5.tgz
- :x: **yargs-parser-11.1.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/rvvergara/expensify/commit/fdfd5fe0d2a536540aa7d35163ec94f119bc53f0">fdfd5fe0d2a536540aa7d35163ec94f119bc53f0</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
yargs-parser could be tricked into adding or modifying properties of Object.prototype using a "__proto__" payload.
<p>Publish Date: 2020-03-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7608>CVE-2020-7608</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/yargs/yargs-parser/commit/63810ca1ae1a24b08293a4d971e70e058c7a41e2">https://github.com/yargs/yargs-parser/commit/63810ca1ae1a24b08293a4d971e70e058c7a41e2</a></p>
<p>Release Date: 2020-03-16</p>
<p>Fix Resolution (yargs-parser): 5.0.1</p>
<p>Direct dependency fix Resolution (node-sass): 4.12.0</p><p>Fix Resolution (yargs-parser): 13.1.2</p>
<p>Direct dependency fix Resolution (webpack-dev-server): 3.11.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve medium detected in yargs parser tgz yargs parser tgz autoclosed cve medium severity vulnerability vulnerable libraries yargs parser tgz yargs parser tgz yargs parser tgz the mighty option parser used by yargs library home page a href path to dependency file package json path to vulnerable library node modules yargs parser package json dependency hierarchy node sass tgz root library sass graph tgz yargs tgz x yargs parser tgz vulnerable library yargs parser tgz the mighty option parser used by yargs library home page a href path to dependency file package json path to vulnerable library node modules yargs parser package json dependency hierarchy webpack dev server tgz root library yargs tgz x yargs parser tgz vulnerable library found in head commit a href vulnerability details yargs parser could be tricked into adding or modifying properties of object prototype using a proto payload publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution yargs parser direct dependency fix resolution node sass fix resolution yargs parser direct dependency fix resolution webpack dev server step up your open source security game with whitesource | 0 |
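yargs-parser is JavaScript, so the `"__proto__"` payload described above actually mutates `Object.prototype` there; a plain Python dict merely gains an ordinary key. As a language-neutral sketch of the bug class and of the deny-list style of the fix shipped in 5.0.1 / 13.1.2 (function names here are illustrative, not yargs-parser's API):

```python
# Key segments the patched yargs-parser refuses to expand.
DANGEROUS_KEYS = {"__proto__", "constructor", "prototype"}

def set_dotted(target: dict, path: str, value) -> None:
    """Expand a dotted option name like 'a.b.c' into nested dicts --
    the parser behavior that, in JavaScript, lets a '__proto__'
    segment reach Object.prototype."""
    keys = path.split(".")
    for key in keys[:-1]:
        target = target.setdefault(key, {})
    target[keys[-1]] = value

def set_dotted_safe(target: dict, path: str, value) -> None:
    """Same expansion, but reject the polluting key names up front."""
    if any(key in DANGEROUS_KEYS for key in path.split(".")):
        raise ValueError(f"refusing prototype-polluting key in {path!r}")
    set_dotted(target, path, value)
```

The upgrade path listed in the Suggested Fix (node-sass 4.12.0, webpack-dev-server 3.11.0) pulls in parser versions that apply exactly this kind of key filtering.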
5,401 | 27,115,670,395 | IssuesEvent | 2023-02-15 18:22:01 | VA-Explorer/va_explorer | https://api.github.com/repos/VA-Explorer/va_explorer | closed | Make BrowserTest's setUpClass headless | Type: Maintainance Domain: Deployment/ Integration Status: Inactive | **What is the expected state?**
BrowserTest's setUpClass should be headless, so it can be run on CI.
**What is the actual state?**
The setUpClass is currently not headless, so it cannot be run on CI.
**Relevant context**
- **```va_explorer/va_explorer/users/tests/test_user_create_browser.py```**
The comment above this method also included the following question:
> TODO: Needs to be headless to run on CI. Any value to running non-headless locally?
| True | Make BrowserTest's setUpClass headless - **What is the expected state?**
BrowserTest's setUpClass should be headless, so it can be run on CI.
**What is the actual state?**
The setUpClass is currently not headless, so it cannot be run on CI.
**Relevant context**
- **```va_explorer/va_explorer/users/tests/test_user_create_browser.py```**
The comment above this method also included the following question:
> TODO: Needs to be headless to run on CI. Any value to running non-headless locally?
| main | make browsertest s setupclass headless what is the expected state browsertests setupclass should be headless so it can be run on ci what is the actual state the setupclass is currently not headless so it cannot be run on ci relevant context va explorer va explorer users tests test user create browser py the comment above this method also included the following question as well todo needs to be headless to run on ci any value to running non headless locally | 1 |
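The TODO quoted in that issue asks for headless runs on CI while keeping the option of a visible browser locally. One common pattern (a sketch only: it assumes a Selenium-style Chrome driver, and the helper name and env-var check are hypothetical, not taken from the VA Explorer code) is to key the decision off the `CI` environment variable that most CI providers set:

```python
import os
from typing import Optional

def browser_arguments(force_headless: Optional[bool] = None) -> list:
    """Build the Chrome argument list for a BrowserTest-style setUpClass.

    Headless is forced when the CI env var is set; locally the browser
    stays visible unless force_headless=True, so a developer can still
    watch the test run when debugging.
    """
    headless = force_headless if force_headless is not None else bool(os.environ.get("CI"))
    args = ["--no-sandbox", "--window-size=1280,800"]
    if headless:
        args.append("--headless=new")  # modern Chrome headless flag
    return args

# Each returned argument would then be passed to Selenium's
# webdriver.ChromeOptions() via options.add_argument(...) in setUpClass.
```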
1,205 | 5,143,265,351 | IssuesEvent | 2017-01-12 15:35:21 | Particular/NServiceBus.Host.AzureCloudService | https://api.github.com/repos/Particular/NServiceBus.Host.AzureCloudService | closed | V7 RTM | Tag: Maintainer Prio | ## Items to complete
- ~~Change package author name -> use updated NugetPackage https://github.com/Particular/V6Launch/issues/4~~ not needed
- [ ] Create release notes (general ones, similar to the [Core ones with milestones](https://github.com/Particular/V6Launch/issues/75#issuecomment-251098093))
- [ ] Update [V6Launch status list](https://github.com/Particular/V6Launch/issues/4)
| True | V7 RTM - ## Items to complete
- ~~Change package author name -> use updated NugetPackage https://github.com/Particular/V6Launch/issues/4~~ not needed
- [ ] Create release notes (general ones, similar to the [Core ones with milestones](https://github.com/Particular/V6Launch/issues/75#issuecomment-251098093))
- [ ] Update [V6Launch status list](https://github.com/Particular/V6Launch/issues/4)
| main | rtm items to complete change package author name use updated nugetpackage not needed create release notes general ones similar to the update | 1 |
4,127 | 19,569,633,279 | IssuesEvent | 2022-01-04 08:14:24 | keptn/community | https://api.github.com/repos/keptn/community | opened | REQUEST: Maintainer status for @oleg-nenashev | membership:maintainer | I would like to request the maintainer status in Keptn. I have been an active contributor since July 2021, and currently I am helping with community management and Keptn growth. I do not contribute much to the Keptn production codebase but I focus on other areas: public roadmap, issue backlog, documentation and tutorials, Keptn adoption, contributor guidelines, community marketing, etc. All of that is an important part of the Keptn community!
My main use-cases for the maintainer status are:
- GitHub org and repository administration so that I can help other maintainers with processing requests and improving the community
- Access to the CNCF support portal so that I can handle CNCF interaction on my own for the ecosystem, e.g. community.cncf.io, Maintainer List, LFX Mentorship, LFX Insights, Slack, Core Infrastructure Initiative, etc.
- Representing the Keptn project officially in the CNCF and other communities, e.g. for Keptn incubation or technical partnerships
### GitHub Username
@oleg-nenashev
### Membership level
maintainer
### Requirements (for member)
- [x] I have reviewed the community membership guidelines (https://github.com/keptn/community/blob/master/COMMUNITY_MEMBERSHIP.md)
- [x] I have enabled 2FA on my GitHub account. See https://github.com/settings/security
- [x] I have subscribed to the [Keptn Slack channel](http://slack.keptn.sh/)
- [x] I am actively contributing to 1 or more Keptn subprojects
- [ ] I have two sponsors that meet the sponsor requirements listed in the community membership guidelines. Among other requirements, sponsors must be approvers or maintainers of at least one repository in the organization and not both affiliated with the same company
- [x] I have spoken to 2 sponsors (approvers or maintainers) ahead of this application, and they have agreed to sponsor my application
- [x] I have filed a PR to add myself as [project member](https://github.com/keptn/keptn/blob/master/MAINTAINERS) and referenced this issue in the PR
### Requirements (for approver, if applicable)
- [x] I am a reviewer of the codebase for at least 1 month
- [x] I am a reviewer for/or author of substantial PRs to the codebase, with the definition of substantial subject to the maintainer's discretion (e.g., refactors/adds new functionality rather than one-line changes).
- [x] I have spoken to 2 sponsors (maintainers) ahead of this application, and they have agreed to sponsor my application
- [x] I have filed a PR to add myself as [project approver](https://github.com/keptn/keptn/blob/master/MAINTAINERS) and referenced this issue in the PR
### Requirements (for maintainer, if applicable)
- [x] I have deep understanding of the technical goals and direction of the subproject
- [x] I have deep understanding of the technical domain (specifically the language) of the subproject
- [x] I did sustained contributions to design and direction by doing all of:
- [x] I am authoring and reviewing proposals
- [x] I am initiating, contributing, and resolving discussions (e-mails, GitHub issues, meetings)
- [x] I am identifying subtle or complex issues in designs and implementation PRs
- [x] I am directly contributing to the subproject through implementation and / or review
- [x] I am aligning with the overall project goals, specifications, and design principles. I am bringing general questions and requests to the discussions as part of the specifications project.
- [x] I have spoken to 2 sponsors (maintainers) ahead of this application, and they have agreed to sponsor my application
- [x] I have filed a PR to add myself as [project maintainer](https://github.com/keptn/keptn/blob/master/MAINTAINERS) and referenced this issue in the PR
### Sponsors
<!-- Replace (at) with the `@` sign -->
- @grabnerandi
- @AloisReitbauer
- @johannes-b
- @thisthat
- @christian-kreuzberger-dtx
Each sponsor should reply to this issue with the comment "*I support*".
Please remember, it is an applicant's responsibility to get their sponsors' confirmation before submitting the request.
### List of contributions to the Keptn project
- PRs reviewed / authored
- Filed pull requests: https://github.com/search?q=org%3Akeptn+org%3Akeptn-sandbox+org%3Akeptn-integrations+is%3Apr+author%3Aoleg-nenashev&type=issues . Key authored ones:
- https://github.com/keptn/keptn/pull/6405
- https://github.com/keptn/keptn.github.io/pull/989
- https://github.com/keptn/tutorials/pull/209
- Reviewed PRs: https://github.com/search?q=org%3Akeptn+org%3Akeptn-sandbox+org%3Akeptn-integrations+is%3Apr+reviewed-by%3Aoleg-nenashev&type=issues
- Issues responded to
- Keptn RFEs relevant to the community feedback and requests
- Keptn community roadmap on https://github.com/keptn/community/issues
- User support in the `#help` channel
- SIG projects I am involved with
- CDF: Events SIG (Keptn Events is a part of it), Interoperability SIG
- Many Jenkins SIGs | True | REQUEST: Maintainer status for @oleg-nenashev - I would like to request the maintainer status in Keptn. I have been an active contributor since July 2021, and currently I am helping with community management and Keptn growth. I do not contribute much to the Keptn production codebase but I focus on other areas: public roadmap, issue backlog, documentation and tutorials, Keptn adoption, contributor guidelines, community marketing, etc. All of that is an important part of the Keptn community!
My main use-cases for the maintainer status are:
- GitHub org and repository administration so that I can help other maintainers with processing requests and improving the community
- Access to the CNCF support portal so that I can handle CNCF interaction on my own for the ecosystem, e.g. community.cncf.io, Maintainer List, LFX Mentorship, LFX Insights, Slack, Core Infrastructure Initiative, etc.
- Representing the Keptn project officially in the CNCF and other communities, e.g. for Keptn incubation or technical partnerships
### GitHub Username
@oleg-nenashev
### Membership level
maintainer
### Requirements (for member)
- [x] I have reviewed the community membership guidelines (https://github.com/keptn/community/blob/master/COMMUNITY_MEMBERSHIP.md)
- [x] I have enabled 2FA on my GitHub account. See https://github.com/settings/security
- [x] I have subscribed to the [Keptn Slack channel](http://slack.keptn.sh/)
- [x] I am actively contributing to 1 or more Keptn subprojects
- [ ] I have two sponsors that meet the sponsor requirements listed in the community membership guidelines. Among other requirements, sponsors must be approvers or maintainers of at least one repository in the organization and not both affiliated with the same company
- [x] I have spoken to 2 sponsors (approvers or maintainers) ahead of this application, and they have agreed to sponsor my application
- [x] I have filed a PR to add myself as [project member](https://github.com/keptn/keptn/blob/master/MAINTAINERS) and referenced this issue in the PR
### Requirements (for approver, if applicable)
- [x] I am a reviewer of the codebase for at least 1 month
- [x] I am a reviewer for/or author of substantial PRs to the codebase, with the definition of substantial subject to the maintainer's discretion (e.g., refactors/adds new functionality rather than one-line changes).
- [x] I have spoken to 2 sponsors (maintainers) ahead of this application, and they have agreed to sponsor my application
- [x] I have filed a PR to add myself as [project approver](https://github.com/keptn/keptn/blob/master/MAINTAINERS) and referenced this issue in the PR
### Requirements (for maintainer, if applicable)
- [x] I have deep understanding of the technical goals and direction of the subproject
- [x] I have deep understanding of the technical domain (specifically the language) of the subproject
- [x] I did sustained contributions to design and direction by doing all of:
- [x] I am authoring and reviewing proposals
- [x] I am initiating, contributing, and resolving discussions (e-mails, GitHub issues, meetings)
- [x] I am identifying subtle or complex issues in designs and implementation PRs
- [x] I am directly contributing to the subproject through implementation and / or review
- [x] I am aligning with the overall project goals, specifications, and design principles. I am bringing general questions and requests to the discussions as part of the specifications project.
- [x] I have spoken to 2 sponsors (maintainers) ahead of this application, and they have agreed to sponsor my application
- [x] I have filed a PR to add myself as [project maintainer](https://github.com/keptn/keptn/blob/master/MAINTAINERS) and referenced this issue in the PR
### Sponsors
<!-- Replace (at) with the `@` sign -->
- @grabnerandi
- @AloisReitbauer
- @johannes-b
- @thisthat
- @christian-kreuzberger-dtx
Each sponsor should reply to this issue with the comment "*I support*".
Please remember, it is an applicant's responsibility to get their sponsors' confirmation before submitting the request.
### List of contributions to the Keptn project
- PRs reviewed / authored
- Filed pull requests: https://github.com/search?q=org%3Akeptn+org%3Akeptn-sandbox+org%3Akeptn-integrations+is%3Apr+author%3Aoleg-nenashev&type=issues . Key authored ones:
- https://github.com/keptn/keptn/pull/6405
- https://github.com/keptn/keptn.github.io/pull/989
- https://github.com/keptn/tutorials/pull/209
- Reviewed PRs: https://github.com/search?q=org%3Akeptn+org%3Akeptn-sandbox+org%3Akeptn-integrations+is%3Apr+reviewed-by%3Aoleg-nenashev&type=issues
- Issues responded to
- Keptn RFEs relevant to the community feedback and requests
- Keptn community roadmap on https://github.com/keptn/community/issues
- User support in the `#help` channel
- SIG projects I am involved with
- CDF: Events SIG (Keptn Events is a part of it), Interoperability SIG
- Many Jenkins SIGs | main | request maintainer status for oleg nenashev i would like to request the maintainer status in keptn i have been an active contributor since july and currently i am helping with community management and keptn growth i do not contribute much to the keptn production codebase but i focus on other areas public roadmap issue backlog documentation and tutorials keptn adoption contributor guidelines community marketing etc all of that is an important part of the keptn community my main use cases for the maintainer status are github org and repository administration so that i can help other maintainers with processing requests and improving the community access to the cncf support portal o that i can handle cncf interaction on my own for the ecosystem e g community cncf io maintainer list lfx mentorship lfx insights slack core infrastructure initiative etc representing the keptn project officially in the cncf and other communities e g for keptn incubation or technical partnerships github username oleg nenashev membership level maintainer requirements for member i have reviewed the community membership guidelines i have enabled on my github account see i have subscribed to the i am actively contributing to or more keptn subprojects i have two sponsors that meet the sponsor requirements listed in the community membership guidelines among other requirements sponsors must be approvers or maintainers of at least one repository in the organization and not both affiliated with the same company i have spoken to sponsors approvers or maintainers ahead of this application and they have agreed to sponsor my application i have filed a pr to add myself as and referenced this issue in the pr requirements for approver if applicable i am a reviewer of the codebase for at least month i am a reviewer for or author of substantial prs to the codebase with the definition of substantial subject to the maintainer s discretion e g refactors adds new functionality rather 
than one line changes i have spoken to sponsors maintainers ahead of this application and they have agreed to sponsor my application i have filed a pr to add myself as and referenced this issue in the pr requirements for maintainer if applicable i have deep understanding of the technical goals and direction of the subproject i have deep understanding of the technical domain specifically the language of the subproject i did sustained contributions to design and direction by doing all of i am authoring and reviewing proposals i am initiating contributing and resolving discussions e mails github issues meetings i am identifying subtle or complex issues in designs and implementation prs i am directly contributing to the subproject through implementation and or review i am aligning with the overall project goals specifications and design principles i am bringing general questions and requests to the discussions as part of the specifications project i have spoken to sponsors maintainers ahead of this application and they have agreed to sponsor my application i have filed a pr to add myself as and referenced this issue in the pr sponsors grabnerandi aloisreitbauer johannes b thisthat christian kreuzberger dtx each sponsor should reply to this issue with the comment i support please remember it is an applicant s responsibility to get their sponsors confirmation before submitting the request list of contributions to the keptn project prs reviewed authored filed pull requests key authored ones reviewed prs issues responded to keptn rfes relevant to the community feedback and requests keptn community roadmap on user support in the help channel sig projects i am involved with cdf events sig keptn events is a part of it interoperability sig many jenkins sigs | 1 |
2,272 | 8,038,152,513 | IssuesEvent | 2018-07-30 14:40:11 | citrusframework/citrus | https://api.github.com/repos/citrusframework/citrus | closed | Webservice Basic Auth not working | Prio: Medium TO REVIEW Type: Maintainance | I have tried to set up basic authentication for the webservice client, but it doesn't seem to work
```
<bean id="basicAuthClient" class="org.springframework.ws.transport.http.HttpComponentsMessageSender">
<property name="authScope">
<bean class="org.apache.http.auth.AuthScope">
<constructor-arg value="localhost"/>
<constructor-arg value="6666"/>
<constructor-arg value=""/>
<constructor-arg value="basic"/>
</bean>
</property>
<property name="credentials">
<bean class="org.apache.http.auth.UsernamePasswordCredentials">
<constructor-arg value="username"/>
<constructor-arg value="password"/>
</bean>
</property>
</bean>
<citrus-ws:client id="webServiceClient"
timeout="1000"
message-sender="basicAuthClient"
request-url="http://localhost:6666/endpoint"/>
```
Gives me a 401.
The request log on the server does not show the username, so it isn't sent.
If I try it like this:
```
<send>
...
<header><element name="citrus_http_Authorization" value="citrus:concat('Basic ', citrus:encodeBase64('username:password'))"/></header>
```
it's working | True | Webservice Basic Auth not working - I have tried to set up basic authentication for the webservice client, but it doesn't seem to work
```
<bean id="basicAuthClient" class="org.springframework.ws.transport.http.HttpComponentsMessageSender">
<property name="authScope">
<bean class="org.apache.http.auth.AuthScope">
<constructor-arg value="localhost"/>
<constructor-arg value="6666"/>
<constructor-arg value=""/>
<constructor-arg value="basic"/>
</bean>
</property>
<property name="credentials">
<bean class="org.apache.http.auth.UsernamePasswordCredentials">
<constructor-arg value="username"/>
<constructor-arg value="password"/>
</bean>
</property>
</bean>
<citrus-ws:client id="webServiceClient"
timeout="1000"
message-sender="basicAuthClient"
request-url="http://localhost:6666/endpoint"/>
```
Gives me a 401.
The request log on the server does not show the username, so it isn't sent.
If I try it like this:
```
<send>
...
<header><element name="citrus_http_Authorization" value="citrus:concat('Basic ', citrus:encodeBase64('username:password'))"/></header>
```
it's working | main | webservice basic auth not working i have tried to setup the basic authentification for the webservice client but it doesn t seems to work citrus ws client id webserviceclient timeout message sender basicauthclient request url gives me a the request log on the server do not show the username so it isn t send if i try it like this it s working | 1 |
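The working workaround in that issue sets the Authorization header by hand, and the value Citrus builds with `citrus:concat('Basic ', citrus:encodeBase64('username:password'))` is just a standard RFC 7617 Basic auth header. A quick way to check the expected header value (Python used here purely for illustration):

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Build the RFC 7617 Basic Authorization header value:
    'Basic ' + base64('username:password')."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# Matches what the <header> workaround in the test produces:
print(basic_auth_header("username", "password"))  # → Basic dXNlcm5hbWU6cGFzc3dvcmQ=
```

If the HttpComponentsMessageSender route sends no such header at all (as the server log suggests), comparing against this value is a simple way to confirm whether the credentials are being applied.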
4,605 | 23,849,928,216 | IssuesEvent | 2022-09-06 16:56:32 | ocsf/ocsf-schema | https://api.github.com/repos/ocsf/ocsf-schema | closed | Decide on release version, update on last change before public release | question maintainers | Issue to track requirement for correct versioning for initial release. | True | Decide on release version, update on last change before public release - Issue to track requirement for correct versioning for initial release. | main | decide on release version update on last change before public release issue to track requirement for correct versioning for initial release | 1 |
148 | 2,669,186,477 | IssuesEvent | 2015-03-23 14:15:17 | TheRosettaFoundation/SOLAS-Match | https://api.github.com/repos/TheRosettaFoundation/SOLAS-Match | closed | Implement a scalable mechanism to easily localise static content found in the database - e.g. System Badges | feature-idea Localisation maintainability | A scalable mechanism needs to be implemented to facilitate the localisation of some static strings found in the database such as System badges and their descriptions.
The proposed idea is to develop a function such as getTranslationForDBString($string) which will do the following operations:
1) get the string
2) hash it (e.g. using sha1)
3) search the strings.xml file of the target locale for a string where the name of the string ends with the above hash
4) if a matching string is found, use it in place of the string returned by the db
5) if not found, use the original string (from the db)
e.g. getTranslationForDBString("SOLAS Match - Registered")
1) string = "SOLAS Match - Registered"
2) hash/encoded string = 9r9yhyc4uv40c8occog84s408ssk448
3) search in strings file of the current locale for an entry like
```
< string name="db_string_badges_registered_9r9yhyc4uv40c8occog84s408ssk448" > translation </string>
```
4) a translation is found, so the function returns the translation
The above mechanism is scalable to a certain degree and allows meaningful keys for the strings file entries. In this manner, the database does not need to be altered to have separate columns for each language, and the same strings.xml file can be used to localise some static content in the db. The disadvantage is that if an entry in the database is changed, the corresponding keys in the strings.xml file need to be changed too.
However, the above mechanism is not practical for localising content such as the county list as collation/sorting order of entries needs to be taken into account when presenting these to the user. | True | Implement a scalable mechanism to easily localise static content found in the database - e.g. System Badges - A scalable mechanism needs to be implemented to facilitate the localisation of some static strings found in the database such as System badges and their descriptions.
The proposed idea is to develop a function such as getTranslationForDBString($string) which will do the following operations:
1) get the string
2) hash it (e.g. using sha1)
3) search the strings.xml file of the target locale for a string where the name of the string ends with the above hash
4) if a matching string is found, use it in place of the string returned by the db
5) if not found, use the original string (from the db)
e.g. getTranslationForDBString("SOLAS Match - Registered")
1) string = "SOLAS Match - Registered"
2) hash/encoded string = 9r9yhyc4uv40c8occog84s408ssk448
3) search in strings file of the current locale for an entry like
```
< string name="db_string_badges_registered_9r9yhyc4uv40c8occog84s408ssk448" > translation </string>
```
4) a translation is found, so the function returns the translation
The above mechanism is scalable to a certain degree and allows meaningful keys for the strings file entries. In this manner, the database does not need to be altered to have separate columns for each language, and the same strings.xml file can be used to localise some static content in the db. The disadvantage is that if an entry in the database is changed, the corresponding keys in the strings.xml file need to be changed too.
However, the above mechanism is not practical for localising content such as the county list as collation/sorting order of entries needs to be taken into account when presenting these to the user. | main | implement a scalable mechanism to easily localise static content found in the database e g system badges a scalable mechanism needs to be implemented to facilitate the localisation of some static strings found in the database such as system badges and their descriptions the proposed idea is to develop a function such as gettranslationfordbstring string which will do the following operations get the string hash it e g using search the strings xml file of the target locale for a string where the name of the string ends with the above hash if found a matching string use that in place of the string returned by the db if not found use the original string from the db e g gettranslationfordbstring solas match registered string solas match registered hash encoded string search in strings file of the current locale for an entry like translation found a translation so function returns the translation the above mechanism is scalable to a certain degree and allows meaningful keys for the strings file entry in this manner the database needs not to be altered to have separate columns for each language and same strings xml file can be used to localise some static content in the db the disadvantage is that if an entry in the database is changed the keys in the strings xml file needs to be changed too however the above mechanism is not practical for localising content such as the county list as collation sorting order of entries needs to be taken into account when presenting these to the user | 1 |
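The lookup proposed in steps 1–5 of that issue can be sketched as follows. This is a sketch only: it assumes a SHA-1 hex digest (the `9r9yhyc4…` sample key in the issue appears to use some other encoding), models the locale's strings.xml as a flat `{name: value}` dict, and the French translation is a made-up example.

```python
import hashlib

def get_translation_for_db_string(db_string: str, locale_strings: dict) -> str:
    """Hash the DB string and look for a strings.xml entry whose name
    ends with that hash; fall back to the original DB string when no
    translation exists (steps 1-5 of the proposal)."""
    digest = hashlib.sha1(db_string.encode("utf-8")).hexdigest()  # step 2
    for name, translation in locale_strings.items():              # step 3
        if name.endswith(digest):
            return translation                                    # step 4
    return db_string                                              # step 5

# e.g. one locale's strings.xml, flattened to {name: value}
digest = hashlib.sha1("SOLAS Match - Registered".encode("utf-8")).hexdigest()
strings_fr = {f"db_string_badges_registered_{digest}": "SOLAS Match - Inscrit"}
print(get_translation_for_db_string("SOLAS Match - Registered", strings_fr))  # → SOLAS Match - Inscrit
print(get_translation_for_db_string("Unknown badge", strings_fr))             # → Unknown badge
```

The noted disadvantage follows directly from the key scheme: because the hash of the DB string is embedded in the strings.xml key name, any change to the DB entry invalidates the key.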
287,963 | 31,856,849,457 | IssuesEvent | 2023-09-15 08:06:29 | nidhi7598/linux-4.19.72_CVE-2022-3564 | https://api.github.com/repos/nidhi7598/linux-4.19.72_CVE-2022-3564 | closed | CVE-2019-17666 (High) detected in linuxlinux-4.19.294 - autoclosed | Mend: dependency security vulnerability | ## CVE-2019-17666 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.294</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-4.19.72_CVE-2022-3564/commit/454c7dacf6fa9a6de86d4067f5a08f25cffa519b">454c7dacf6fa9a6de86d4067f5a08f25cffa519b</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/wireless/realtek/rtlwifi/ps.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/wireless/realtek/rtlwifi/ps.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
rtl_p2p_noa_ie in drivers/net/wireless/realtek/rtlwifi/ps.c in the Linux kernel through 5.3.6 lacks a certain upper-bound check, leading to a buffer overflow.
<p>Publish Date: 2019-10-17
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-17666>CVE-2019-17666</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Adjacent
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2019-17666">https://nvd.nist.gov/vuln/detail/CVE-2019-17666</a></p>
<p>Release Date: 2019-10-24</p>
<p>Fix Resolution: linux - 5.3.9.1-1;linux-lts - 4.19.82-1;linux-zen - 5.3.9.1-1;linux-hardened - 5.3.7.b-1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-17666 (High) detected in linuxlinux-4.19.294 - autoclosed - ## CVE-2019-17666 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.294</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-4.19.72_CVE-2022-3564/commit/454c7dacf6fa9a6de86d4067f5a08f25cffa519b">454c7dacf6fa9a6de86d4067f5a08f25cffa519b</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/wireless/realtek/rtlwifi/ps.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/wireless/realtek/rtlwifi/ps.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
rtl_p2p_noa_ie in drivers/net/wireless/realtek/rtlwifi/ps.c in the Linux kernel through 5.3.6 lacks a certain upper-bound check, leading to a buffer overflow.
<p>Publish Date: 2019-10-17
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-17666>CVE-2019-17666</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Adjacent
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2019-17666">https://nvd.nist.gov/vuln/detail/CVE-2019-17666</a></p>
<p>Release Date: 2019-10-24</p>
<p>Fix Resolution: linux - 5.3.9.1-1;linux-lts - 4.19.82-1;linux-zen - 5.3.9.1-1;linux-hardened - 5.3.7.b-1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in linuxlinux autoclosed cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch main vulnerable source files drivers net wireless realtek rtlwifi ps c drivers net wireless realtek rtlwifi ps c vulnerability details rtl noa ie in drivers net wireless realtek rtlwifi ps c in the linux kernel through lacks a certain upper bound check leading to a buffer overflow publish date url a href cvss score details base score metrics exploitability metrics attack vector adjacent attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution linux linux lts linux zen linux hardened b step up your open source security game with mend | 0 |
1,346 | 5,728,910,513 | IssuesEvent | 2017-04-21 03:19:10 | tomchentw/react-google-maps | https://api.github.com/repos/tomchentw/react-google-maps | closed | Prep for React v16 and removal for deprecation warnings (PropTypes & createClass) | CALL_FOR_MAINTAINERS | Are there any plans to prepare for React v16 as per this post: [React v15.5.0](https://facebook.github.io/react/blog/2017/04/07/react-v15.5.0.html) ..?
It means replacing all `React.PropTypes` and using the [prop-types](https://github.com/reactjs/prop-types) package, and repalcing all `React.createClass` using the [create-react-class](https://www.npmjs.com/package/create-react-class) package.
Doing this will also remove the deprecation warnings for anyone using React v15.5.0 which show in the console in development env.
| True | Prep for React v16 and removal for deprecation warnings (PropTypes & createClass) - Are there any plans to prepare for React v16 as per this post: [React v15.5.0](https://facebook.github.io/react/blog/2017/04/07/react-v15.5.0.html) ..?
It means replacing all `React.PropTypes` and using the [prop-types](https://github.com/reactjs/prop-types) package, and repalcing all `React.createClass` using the [create-react-class](https://www.npmjs.com/package/create-react-class) package.
Doing this will also remove the deprecation warnings for anyone using React v15.5.0 which show in the console in development env.
| main | prep for react and removal for deprecation warnings proptypes createclass are there any plans to prepare for react as per this post it means replacing all react proptypes and using the package and repalcing all react createclass using the package doing this will also remove the deprecation warnings for anyone using react which show in the console in development env | 1 |
811,925 | 30,306,221,175 | IssuesEvent | 2023-07-10 09:39:15 | jpmorganchase/salt-ds | https://api.github.com/repos/jpmorganchase/salt-ds | closed | Salt ag grid theme doesn't style filter panel | type: bug 🪲 community priority: medium 😠 | ### Package name(s)
AG Grid Theme (@salt-ds/ag-grid-theme)
### Package version(s)
"@salt-ds/ag-grid-theme": "1.1.6"
### Description
Out of box ag grid filter panel doesn't match overall styling
- Spacing within the panel are tighter than before
- Buttons looks default HTML buttons, instead of Salt ones
Raised by MTK
### Steps to reproduce
Use below code in one of column def, then open the filter panel using triple dots menu on hovering header
filter: 'agTextColumnFilter',
filterParams: {
buttons: ['reset', 'apply'],
},
https://stackblitz.com/edit/salt-ag-grid-theme-g6m6gj?file=package.json,App.jsx
### Expected behavior
Spacing should match general Salt design language, button should look like Salt ones
### Operating system
- [X] macOS
- [ ] Windows
- [ ] Linux
- [ ] iOS
- [ ] Android
### Browser
- [X] Chrome
- [ ] Safari
- [ ] Firefox
- [ ] Edge
### Are you a JPMorgan Chase & Co. employee?
- [X] I am an employee of JPMorgan Chase & Co. | 1.0 | Salt ag grid theme doesn't style filter panel - ### Package name(s)
AG Grid Theme (@salt-ds/ag-grid-theme)
### Package version(s)
"@salt-ds/ag-grid-theme": "1.1.6"
### Description
Out of box ag grid filter panel doesn't match overall styling
- Spacing within the panel are tighter than before
- Buttons looks default HTML buttons, instead of Salt ones
Raised by MTK
### Steps to reproduce
Use below code in one of column def, then open the filter panel using triple dots menu on hovering header
filter: 'agTextColumnFilter',
filterParams: {
buttons: ['reset', 'apply'],
},
https://stackblitz.com/edit/salt-ag-grid-theme-g6m6gj?file=package.json,App.jsx
### Expected behavior
Spacing should match general Salt design language, button should look like Salt ones
### Operating system
- [X] macOS
- [ ] Windows
- [ ] Linux
- [ ] iOS
- [ ] Android
### Browser
- [X] Chrome
- [ ] Safari
- [ ] Firefox
- [ ] Edge
### Are you a JPMorgan Chase & Co. employee?
- [X] I am an employee of JPMorgan Chase & Co. | non_main | salt ag grid theme doesn t style filter panel package name s ag grid theme salt ds ag grid theme package version s salt ds ag grid theme description out of box ag grid filter panel doesn t match overall styling spacing within the panel are tighter than before buttons looks default html buttons instead of salt ones raised by mtk steps to reproduce use below code in one of column def then open the filter panel using triple dots menu on hovering header filter agtextcolumnfilter filterparams buttons expected behavior spacing should match general salt design language button should look like salt ones operating system macos windows linux ios android browser chrome safari firefox edge are you a jpmorgan chase co employee i am an employee of jpmorgan chase co | 0 |
70,525 | 3,331,762,675 | IssuesEvent | 2015-11-11 17:08:18 | YetiForceCompany/YetiForceCRM | https://api.github.com/repos/YetiForceCompany/YetiForceCRM | closed | [Improvement] Documents module | Label::Core Priority::#2 Normal Type::Question | In checking out Documents functionality...it would be nice to be able to view the document online without having to download it. (whether with PDF viewer or Word viewer.)
Since the doc in question was a word doc, I put the text into the note field to see it online. worked fine, but then realized it would be nice to be able to print it from there. just some thoughts. | 1.0 | [Improvement] Documents module - In checking out Documents functionality...it would be nice to be able to view the document online without having to download it. (whether with PDF viewer or Word viewer.)
Since the doc in question was a word doc, I put the text into the note field to see it online. worked fine, but then realized it would be nice to be able to print it from there. just some thoughts. | non_main | documents module in checking out documents functionality it would be nice to be able to view the document online without having to download it whether with pdf viewer or word viewer since the doc in question was a word doc i put the text into the note field to see it online worked fine but then realized it would be nice to be able to print it from there just some thoughts | 0 |
5,326 | 26,899,258,812 | IssuesEvent | 2023-02-06 14:36:02 | BioArchLinux/Packages | https://api.github.com/repos/BioArchLinux/Packages | closed | [MAINTAIN] ugene maintain | maintain | <!--
Please report the error of one package in one issue! Use multi issues to report multi bugs.
Thanks!
-->
**Log of the bug**
**Packages (please complete the following information):**
- Package Name: ugene
**Description**
Could @hubutui maintain ugene at this repo
| True | [MAINTAIN] ugene maintain - <!--
Please report the error of one package in one issue! Use multi issues to report multi bugs.
Thanks!
-->
**Log of the bug**
**Packages (please complete the following information):**
- Package Name: ugene
**Description**
Could @hubutui maintain ugene at this repo
| main | ugene maintain please report the error of one package in one issue use multi issues to report multi bugs thanks log of the bug packages please complete the following information package name ugene description could hubutui maintain ugene at this repo | 1 |
128,110 | 10,515,875,924 | IssuesEvent | 2019-09-28 13:25:12 | aryoda/tryCatchLog | https://api.github.com/repos/aryoda/tryCatchLog | opened | Investigate failing unit test on debian dev gcc (CRAN version) | Testing | > Dear maintainer,
>
> Please see the problems shown on
> <https://cran.r-project.org/web/checks/check_results_tryCatchLog.html>.
>
> Please correct before 2019-10-12 to safely retain your package on CRAN. | 1.0 | Investigate failing unit test on debian dev gcc (CRAN version) - > Dear maintainer,
>
> Please see the problems shown on
> <https://cran.r-project.org/web/checks/check_results_tryCatchLog.html>.
>
> Please correct before 2019-10-12 to safely retain your package on CRAN. | non_main | investigate failing unit test on debian dev gcc cran version dear maintainer please see the problems shown on please correct before to safely retain your package on cran | 0 |
121,171 | 4,805,759,198 | IssuesEvent | 2016-11-02 16:48:05 | quaker-social-action/dte-website | https://api.github.com/repos/quaker-social-action/dte-website | closed | Routing and toggling classes | help wanted Priority-2 t - 1day | Simply adding a class that sets display:none to the page we want to hide when another route is set doesn't actually hide it since fullpage.js automatically sets the page to the full height of the window. We need to either unmount the anchor somehow or figure out another way to do this... | 1.0 | Routing and toggling classes - Simply adding a class that sets display:none to the page we want to hide when another route is set doesn't actually hide it since fullpage.js automatically sets the page to the full height of the window. We need to either unmount the anchor somehow or figure out another way to do this... | non_main | routing and toggling classes simply adding a class that sets display none to the page we want to hide when another route is set doesn t actually hide it since fullpage js automatically sets the page to the full height of the window we need to either unmount the anchor somehow or figure out another way to do this | 0 |
5,540 | 27,737,591,685 | IssuesEvent | 2023-03-15 12:19:41 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Date picker off screen when viewport is short | type: bug work: frontend status: ready restricted: maintainers | ## Problem
The date picker UI is pretty tall. Will tall viewports, the date picker works fine because when it opens from cells near the bottom of the viewport, it opens up -- and when it opens from cells near the top of the viewport, it opens down. But if the viewport is shorter than roughly twice the height of the date picker, then we run unto problems. When a cell near the center of the viewport opens the date picker, the date picker can't open up or down without getting cut off.

I can imagine various approaches to dealing with this problem. Maybe we open it to the side. Maybe we open it in a modal. We should figure something out, which may involve experimenting a bit with our available options.
| True | Date picker off screen when viewport is short - ## Problem
The date picker UI is pretty tall. Will tall viewports, the date picker works fine because when it opens from cells near the bottom of the viewport, it opens up -- and when it opens from cells near the top of the viewport, it opens down. But if the viewport is shorter than roughly twice the height of the date picker, then we run unto problems. When a cell near the center of the viewport opens the date picker, the date picker can't open up or down without getting cut off.

I can imagine various approaches to dealing with this problem. Maybe we open it to the side. Maybe we open it in a modal. We should figure something out, which may involve experimenting a bit with our available options.
| main | date picker off screen when viewport is short problem the date picker ui is pretty tall will tall viewports the date picker works fine because when it opens from cells near the bottom of the viewport it opens up and when it opens from cells near the top of the viewport it opens down but if the viewport is shorter than roughly twice the height of the date picker then we run unto problems when a cell near the center of the viewport opens the date picker the date picker can t open up or down without getting cut off i can imagine various approaches to dealing with this problem maybe we open it to the side maybe we open it in a modal we should figure something out which may involve experimenting a bit with our available options | 1 |
1,421 | 6,188,627,854 | IssuesEvent | 2017-07-04 10:40:14 | ocaml/opam-repository | https://api.github.com/repos/ocaml/opam-repository | opened | Fix logtk.0.8.1 constraints | incorrect constraints needs maintainer action | Ping @c-cube the maintainer of logtk. The package's constraints are completely wrong which shows up as false positives in the CI revdep testing.
There have been a few unsuccessful attempts at fixing this in https://github.com/ocaml/opam-repository/pull/9731 and https://github.com/ocaml/opam-repository/pull/9322. But it seems doing this properly needs a bit more knowledge about the software and its actual dependencies.
Could you please fix this ? Consult the CI logs of the mentioned PRs for actual errors.
| True | Fix logtk.0.8.1 constraints - Ping @c-cube the maintainer of logtk. The package's constraints are completely wrong which shows up as false positives in the CI revdep testing.
There have been a few unsuccessful attempts at fixing this in https://github.com/ocaml/opam-repository/pull/9731 and https://github.com/ocaml/opam-repository/pull/9322. But it seems doing this properly needs a bit more knowledge about the software and its actual dependencies.
Could you please fix this ? Consult the CI logs of the mentioned PRs for actual errors.
| main | fix logtk constraints ping c cube the maintainer of logtk the package s constraints are completely wrong which shows up as false positives in the ci revdep testing there have been a few unsuccessful attempts at fixing this in and but it seems doing this properly needs a bit more knowledge about the software and its actual dependencies could you please fix this consult the ci logs of the mentioned prs for actual errors | 1 |
1,433 | 6,221,362,523 | IssuesEvent | 2017-07-10 05:17:04 | MDAnalysis/mdanalysis | https://api.github.com/repos/MDAnalysis/mdanalysis | closed | remove `vm` from repository | installation maintainability | The repository contains templates for a number of virtual machines in the `vm` directory.
Although I liked them at the time, they are not being maintained at the moment so we are probably better off just removing them. When they are removed we also have to adjust the tutorial http://www.mdanalysis.org/MDAnalysisTutorial/installation.html#virtual-machine
If anyone has any concerns or desperately wants to keep them say so. | True | remove `vm` from repository - The repository contains templates for a number of virtual machines in the `vm` directory.
Although I liked them at the time, they are not being maintained at the moment so we are probably better off just removing them. When they are removed we also have to adjust the tutorial http://www.mdanalysis.org/MDAnalysisTutorial/installation.html#virtual-machine
If anyone has any concerns or desperately wants to keep them say so. | main | remove vm from repository the repository contains templates for a number of virtual machines in the vm directory although i liked them at the time they are not being maintained at the moment so we are probably better off just removing them when they are removed we also have to adjust the tutorial if anyone has any concerns or desperately wants to keep them say so | 1 |
283,490 | 21,316,649,251 | IssuesEvent | 2022-04-16 11:54:02 | l-shihao/pe | https://api.github.com/repos/l-shihao/pe | opened | User Stories in DG format error | severity.VeryLow type.DocumentationBug | Initially I thought no `email` mentioned in the User Stories table, turns out this could some offsets or extra bars `|` in the markdown?

Table should only has 4 columns
<!--session: 1650102990369-ab02ea20-3e4d-402f-a563-6608270bedb1-->
<!--Version: Web v3.4.2--> | 1.0 | User Stories in DG format error - Initially I thought no `email` mentioned in the User Stories table, turns out this could some offsets or extra bars `|` in the markdown?

Table should only has 4 columns
<!--session: 1650102990369-ab02ea20-3e4d-402f-a563-6608270bedb1-->
<!--Version: Web v3.4.2--> | non_main | user stories in dg format error initially i thought no email mentioned in the user stories table turns out this could some offsets or extra bars in the markdown table should only has columns | 0 |
1,885 | 6,577,521,839 | IssuesEvent | 2017-09-12 01:29:57 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | ec2_vol unable to create volumes of specific size from snapshots | affects_2.1 aws bug_report cloud waiting_on_maintainer | ##### Issue Type:
Bug Report
##### Plugin name
ec2_vol
##### Ansible Version:
2.1.0 from git (cd51ba7965325fd5e7857e4cf2c3725b81b39352)
##### Ansible Configuration:
Default
##### Environment:
Using Ubuntu 15.10 in AWS but this is not affected by OS.
##### Summary:
Some items copied from another user that created [this](https://github.com/ansible/ansible/issues/14007) issue which got closed since it was in the wrong repo but it's pretty much identical:
The changes made to lines 442/443 in this pull request (https://github.com/ansible/ansible-modules-core/pull/1747/files) break the logic, making it so that the user can no longer create a volume of a specific size out of a snapshot.
Lines from pull request:
ORIGINAL: `if volume_size and id:`
NEW BROKEN: `if volume_size and (id or snapshot):`
##### Steps To Reproduce:
Test Playbook:
```
- name: Create volume and attach to this instance
register: new_volume
ec2_vol:
state: present
region: XXXXX
instance: XXXXX
volume_size: 30
snapshot: snap-XXXX
device_name: /dev/sdX
```
Comment out the volume_size and you get a volume the size of the snapshot (1GB) instead of the 30 GB but with that flag there, you get errors about wrong parameters.
##### Expected results:
Successfully creates the 30 GB volume out of the 1GB snapshot
##### Actual Results:
```
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Cannot specify volume_size together with id or snapshot"}
```
| True | ec2_vol unable to create volumes of specific size from snapshots - ##### Issue Type:
Bug Report
##### Plugin name
ec2_vol
##### Ansible Version:
2.1.0 from git (cd51ba7965325fd5e7857e4cf2c3725b81b39352)
##### Ansible Configuration:
Default
##### Environment:
Using Ubuntu 15.10 in AWS but this is not affected by OS.
##### Summary:
Some items copied from another user that created [this](https://github.com/ansible/ansible/issues/14007) issue which got closed since it was in the wrong repo but it's pretty much identical:
The changes made to lines 442/443 in this pull request (https://github.com/ansible/ansible-modules-core/pull/1747/files) break the logic, making it so that the user can no longer create a volume of a specific size out of a snapshot.
Lines from pull request:
ORIGINAL: `if volume_size and id:`
NEW BROKEN: `if volume_size and (id or snapshot):`
##### Steps To Reproduce:
Test Playbook:
```
- name: Create volume and attach to this instance
register: new_volume
ec2_vol:
state: present
region: XXXXX
instance: XXXXX
volume_size: 30
snapshot: snap-XXXX
device_name: /dev/sdX
```
Comment out the volume_size and you get a volume the size of the snapshot (1GB) instead of the 30 GB but with that flag there, you get errors about wrong parameters.
##### Expected results:
Successfully creates the 30 GB volume out of the 1GB snapshot
##### Actual Results:
```
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Cannot specify volume_size together with id or snapshot"}
```
| main | vol unable to create volumes of specific size from snapshots issue type bug report plugin name vol ansible version from git ansible configuration default environment using ubuntu in aws but this is not affected by os summary some items copied from another user that created issue which got closed since it was in the wrong repo but it s pretty much identical the changes made to lines in this pull request break the logic making it so that the user can no longer create a volume of a specific size out of a snapshot lines from pull request original if volume size and id new broken if volume size and id or snapshot steps to reproduce test playbook name create volume and attach to this instance register new volume vol state present region xxxxx instance xxxxx volume size snapshot snap xxxx device name dev sdx comment out the volume size and you get a volume the size of the snapshot instead of the gb but with that flag there you get errors about wrong parameters expected results successfully creates the gb volume out of the snapshot actual results fatal failed changed false failed true msg cannot specify volume size together with id or snapshot | 1 |
2,054 | 6,967,764,165 | IssuesEvent | 2017-12-10 13:23:33 | ocaml/opam-repository | https://api.github.com/repos/ocaml/opam-repository | closed | conf-gmp-powm-sec failing to build on mac | needs maintainer action | Attempting to install cryptokit (OCaml 4.06.0) on my macOS 10.13.1 system leads to conf-gmp-powm-sec trying to install. It fails with:
+ cc -c -I/usr/local/include test.c
test.c:1:10: fatal error: 'gmp.h' file not found
#include <gmp.h>
^~~~~~~
1 error generated.
Any ideas?
| True | conf-gmp-powm-sec failing to build on mac - Attempting to install cryptokit (OCaml 4.06.0) on my macOS 10.13.1 system leads to conf-gmp-powm-sec trying to install. It fails with:
+ cc -c -I/usr/local/include test.c
test.c:1:10: fatal error: 'gmp.h' file not found
#include <gmp.h>
^~~~~~~
1 error generated.
Any ideas?
| main | conf gmp powm sec failing to build on mac attempting to install cryptokit ocaml on my macos system leads to conf gmp powm sec trying to install it fails with cc c i usr local include test c test c fatal error gmp h file not found include error generated any ideas | 1 |
5,543 | 27,747,834,743 | IssuesEvent | 2023-03-15 18:17:46 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Auth problem with the records endpoint after session expiry | type: bug work: backend status: ready restricted: maintainers | After my session on the staging site expired, I pressed the "Refresh" UI button within the Table Page.
- The columns and constraints endpoints responded appropriately with an HTTP 403 and the following body:
```json
[
{
"code": 4003,
"message": "Authentication credentials were not provided.",
"details": { "exception": "Authentication credentials were not provided." }
}
]
```
- But the records endpoint gave me an HTTP 500 and the following Django error
<details>
<summary>Traceback</summary>
```
Environment:
Request Method: GET
Request URL: https://staging.mathesar.org/api/db/v0/tables/12454/records/?limit=500&offset=0&order_by=%5B%7B%22field%22%3A79900%2C%22direction%22%3A%22asc%22%7D%5D&filter=%7B%22and%22%3A%5B%7B%22equal%22%3A%5B%7B%22column_id%22%3A%5B62229%5D%7D%2C%7B%22literal%22%3A%5B8%5D%7D%5D%7D%2C%7B%22lesser%22%3A%5B%7B%22column_id%22%3A%5B62697%5D%7D%2C%7B%22literal%22%3A%5B%22100%22%5D%7D%5D%7D%5D%7D
Django Version: 3.1.14
Python Version: 3.9.2
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'django_filters',
'django_property_filter',
'mathesar']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'mathesar.middleware.CursorClosedHandlerMiddleware',
'mathesar.middleware.PasswordChangeNeededMiddleware',
'django_userforeignkey.middleware.UserForeignKeyMiddleware',
'django_request_cache.middleware.RequestCacheMiddleware']
Traceback (most recent call last):
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/db/models/fields/__init__.py", line 1774, in get_prep_value
return int(value)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/contrib/auth/models.py", line 414, in __int__
raise TypeError('Cannot cast AnonymousUser to int. Are you trying to use it in place of User?')
The above exception (Cannot cast AnonymousUser to int. Are you trying to use it in place of User?) was the direct cause of the following exception:
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/rest_framework/viewsets.py", line 125, in view
return self.dispatch(request, *args, **kwargs)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/rest_framework/views.py", line 466, in handle_exception
response = exception_handler(exc, context)
File "/var/www/staging.mathesar.org/mathesar/mathesar/exception_handlers.py", line 59, in mathesar_exception_handler
raise exc
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/rest_framework/views.py", line 497, in dispatch
self.initial(request, *args, **kwargs)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/rest_framework/views.py", line 415, in initial
self.check_permissions(request)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/rest_framework/views.py", line 332, in check_permissions
if not permission.has_permission(request, self):
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/rest_access_policy/access_policy.py", line 53, in has_permission
allowed = self._evaluate_statements(statements, request, view, action)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/rest_access_policy/access_policy.py", line 101, in _evaluate_statements
matched = self._get_statements_matching_conditions(
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/rest_access_policy/access_policy.py", line 235, in _get_statements_matching_conditions
passed = bool(boolExpr.parseString(condition)[0])
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/rest_access_policy/parsing.py", line 32, in __bool__
return self.evalop(bool(a) for a in self.args)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/rest_access_policy/parsing.py", line 32, in <genexpr>
return self.evalop(bool(a) for a in self.args)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/rest_access_policy/parsing.py", line 14, in __bool__
return self.check_condition_fn(self.label)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/rest_access_policy/access_policy.py", line 218, in <lambda>
check_cond_fn = lambda cond: self._check_condition(
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/rest_access_policy/access_policy.py", line 263, in _check_condition
result = method(request, view, action)
File "/var/www/staging.mathesar.org/mathesar/mathesar/api/db/permissions/records.py", line 37, in is_table_viewer
is_schema_viewer = SchemaRole.objects.filter(
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/db/models/query.py", line 942, in filter
return self._filter_or_exclude(False, *args, **kwargs)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/db/models/query.py", line 962, in _filter_or_exclude
clone._filter_or_exclude_inplace(negate, *args, **kwargs)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/db/models/query.py", line 969, in _filter_or_exclude_inplace
self._query.add_q(Q(*args, **kwargs))
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/db/models/sql/query.py", line 1360, in add_q
clause, _ = self._add_q(q_object, self.used_aliases)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/db/models/sql/query.py", line 1379, in _add_q
child_clause, needed_inner = self.build_filter(
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/db/models/sql/query.py", line 1321, in build_filter
condition = self.build_lookup(lookups, col, value)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/db/models/sql/query.py", line 1167, in build_lookup
lookup = lookup_class(lhs, rhs)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/db/models/lookups.py", line 24, in __init__
self.rhs = self.get_prep_lookup()
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/db/models/fields/related_lookups.py", line 117, in get_prep_lookup
self.rhs = target_field.get_prep_value(self.rhs)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/db/models/fields/__init__.py", line 1776, in get_prep_value
raise e.__class__(
Exception Type: TypeError at /api/db/v0/tables/12454/records/
Exception Value: Field 'id' expected a number but got <django.contrib.auth.models.AnonymousUser object at 0x7fe07d769820>.
```
</details>
CC @mathemancer @kgodey
I'm putting this in the First Release milestone, but only because I think someone from the backend team should look at it and then decide how to better prioritize it.
| True | Auth problem with the records endpoint after session expiry - After my session on the staging site expired, I pressed the "Refresh" UI button within the Table Page.
- The columns and constraints endpoints responded appropriately with an HTTP 403 and the following body:
```json
[
{
"code": 4003,
"message": "Authentication credentials were not provided.",
"details": { "exception": "Authentication credentials were not provided." }
}
]
```
- But the records endpoint gave me an HTTP 500 and the following Django error
<details>
<summary>Traceback</summary>
```
Environment:
Request Method: GET
Request URL: https://staging.mathesar.org/api/db/v0/tables/12454/records/?limit=500&offset=0&order_by=%5B%7B%22field%22%3A79900%2C%22direction%22%3A%22asc%22%7D%5D&filter=%7B%22and%22%3A%5B%7B%22equal%22%3A%5B%7B%22column_id%22%3A%5B62229%5D%7D%2C%7B%22literal%22%3A%5B8%5D%7D%5D%7D%2C%7B%22lesser%22%3A%5B%7B%22column_id%22%3A%5B62697%5D%7D%2C%7B%22literal%22%3A%5B%22100%22%5D%7D%5D%7D%5D%7D
Django Version: 3.1.14
Python Version: 3.9.2
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'django_filters',
'django_property_filter',
'mathesar']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'mathesar.middleware.CursorClosedHandlerMiddleware',
'mathesar.middleware.PasswordChangeNeededMiddleware',
'django_userforeignkey.middleware.UserForeignKeyMiddleware',
'django_request_cache.middleware.RequestCacheMiddleware']
Traceback (most recent call last):
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/db/models/fields/__init__.py", line 1774, in get_prep_value
return int(value)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/contrib/auth/models.py", line 414, in __int__
raise TypeError('Cannot cast AnonymousUser to int. Are you trying to use it in place of User?')
The above exception (Cannot cast AnonymousUser to int. Are you trying to use it in place of User?) was the direct cause of the following exception:
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/rest_framework/viewsets.py", line 125, in view
return self.dispatch(request, *args, **kwargs)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/rest_framework/views.py", line 466, in handle_exception
response = exception_handler(exc, context)
File "/var/www/staging.mathesar.org/mathesar/mathesar/exception_handlers.py", line 59, in mathesar_exception_handler
raise exc
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/rest_framework/views.py", line 497, in dispatch
self.initial(request, *args, **kwargs)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/rest_framework/views.py", line 415, in initial
self.check_permissions(request)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/rest_framework/views.py", line 332, in check_permissions
if not permission.has_permission(request, self):
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/rest_access_policy/access_policy.py", line 53, in has_permission
allowed = self._evaluate_statements(statements, request, view, action)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/rest_access_policy/access_policy.py", line 101, in _evaluate_statements
matched = self._get_statements_matching_conditions(
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/rest_access_policy/access_policy.py", line 235, in _get_statements_matching_conditions
passed = bool(boolExpr.parseString(condition)[0])
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/rest_access_policy/parsing.py", line 32, in __bool__
return self.evalop(bool(a) for a in self.args)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/rest_access_policy/parsing.py", line 32, in <genexpr>
return self.evalop(bool(a) for a in self.args)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/rest_access_policy/parsing.py", line 14, in __bool__
return self.check_condition_fn(self.label)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/rest_access_policy/access_policy.py", line 218, in <lambda>
check_cond_fn = lambda cond: self._check_condition(
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/rest_access_policy/access_policy.py", line 263, in _check_condition
result = method(request, view, action)
File "/var/www/staging.mathesar.org/mathesar/mathesar/api/db/permissions/records.py", line 37, in is_table_viewer
is_schema_viewer = SchemaRole.objects.filter(
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/db/models/query.py", line 942, in filter
return self._filter_or_exclude(False, *args, **kwargs)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/db/models/query.py", line 962, in _filter_or_exclude
clone._filter_or_exclude_inplace(negate, *args, **kwargs)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/db/models/query.py", line 969, in _filter_or_exclude_inplace
self._query.add_q(Q(*args, **kwargs))
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/db/models/sql/query.py", line 1360, in add_q
clause, _ = self._add_q(q_object, self.used_aliases)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/db/models/sql/query.py", line 1379, in _add_q
child_clause, needed_inner = self.build_filter(
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/db/models/sql/query.py", line 1321, in build_filter
condition = self.build_lookup(lookups, col, value)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/db/models/sql/query.py", line 1167, in build_lookup
lookup = lookup_class(lhs, rhs)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/db/models/lookups.py", line 24, in __init__
self.rhs = self.get_prep_lookup()
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/db/models/fields/related_lookups.py", line 117, in get_prep_lookup
self.rhs = target_field.get_prep_value(self.rhs)
File "/opt/virtualenvs/staging-mathesar/lib/python3.9/site-packages/django/db/models/fields/__init__.py", line 1776, in get_prep_value
raise e.__class__(
Exception Type: TypeError at /api/db/v0/tables/12454/records/
Exception Value: Field 'id' expected a number but got <django.contrib.auth.models.AnonymousUser object at 0x7fe07d769820>.
```
</details>
CC @mathemancer @kgodey
I'm putting this in the First Release milestone, but only because I think someone from the backend team should look at it and then decide how to better prioritize it.
| main | auth problem with the records endpoint after session expiry after my session on the staging site expired i pressed the refresh ui button within the table page the columns and constraints endpoints responded appropriately with an http and the following body json code message authentication credentials were not provided details exception authentication credentials were not provided but the records endpoint gave me an http and the following django error traceback environment request method get request url django version python version installed applications django contrib admin django contrib auth django contrib contenttypes django contrib sessions django contrib messages django contrib staticfiles rest framework django filters django property filter mathesar installed middleware django middleware security securitymiddleware django contrib sessions middleware sessionmiddleware django middleware common commonmiddleware django middleware csrf csrfviewmiddleware django contrib auth middleware authenticationmiddleware django contrib messages middleware messagemiddleware django middleware clickjacking xframeoptionsmiddleware mathesar middleware cursorclosedhandlermiddleware mathesar middleware passwordchangeneededmiddleware django userforeignkey middleware userforeignkeymiddleware django request cache middleware requestcachemiddleware traceback most recent call last file opt virtualenvs staging mathesar lib site packages django db models fields init py line in get prep value return int value file opt virtualenvs staging mathesar lib site packages django contrib auth models py line in int raise typeerror cannot cast anonymoususer to int are you trying to use it in place of user the above exception cannot cast anonymoususer to int are you trying to use it in place of user was the direct cause of the following exception file opt virtualenvs staging mathesar lib site packages django core handlers exception py line in inner response get response request file opt 
virtualenvs staging mathesar lib site packages django core handlers base py line in get response response wrapped callback request callback args callback kwargs file opt virtualenvs staging mathesar lib site packages django views decorators csrf py line in wrapped view return view func args kwargs file opt virtualenvs staging mathesar lib site packages rest framework viewsets py line in view return self dispatch request args kwargs file opt virtualenvs staging mathesar lib site packages rest framework views py line in dispatch response self handle exception exc file opt virtualenvs staging mathesar lib site packages rest framework views py line in handle exception response exception handler exc context file var www staging mathesar org mathesar mathesar exception handlers py line in mathesar exception handler raise exc file opt virtualenvs staging mathesar lib site packages rest framework views py line in dispatch self initial request args kwargs file opt virtualenvs staging mathesar lib site packages rest framework views py line in initial self check permissions request file opt virtualenvs staging mathesar lib site packages rest framework views py line in check permissions if not permission has permission request self file opt virtualenvs staging mathesar lib site packages rest access policy access policy py line in has permission allowed self evaluate statements statements request view action file opt virtualenvs staging mathesar lib site packages rest access policy access policy py line in evaluate statements matched self get statements matching conditions file opt virtualenvs staging mathesar lib site packages rest access policy access policy py line in get statements matching conditions passed bool boolexpr parsestring condition file opt virtualenvs staging mathesar lib site packages rest access policy parsing py line in bool return self evalop bool a for a in self args file opt virtualenvs staging mathesar lib site packages rest access policy parsing py line 
in return self evalop bool a for a in self args file opt virtualenvs staging mathesar lib site packages rest access policy parsing py line in bool return self check condition fn self label file opt virtualenvs staging mathesar lib site packages rest access policy access policy py line in check cond fn lambda cond self check condition file opt virtualenvs staging mathesar lib site packages rest access policy access policy py line in check condition result method request view action file var www staging mathesar org mathesar mathesar api db permissions records py line in is table viewer is schema viewer schemarole objects filter file opt virtualenvs staging mathesar lib site packages django db models manager py line in manager method return getattr self get queryset name args kwargs file opt virtualenvs staging mathesar lib site packages django db models query py line in filter return self filter or exclude false args kwargs file opt virtualenvs staging mathesar lib site packages django db models query py line in filter or exclude clone filter or exclude inplace negate args kwargs file opt virtualenvs staging mathesar lib site packages django db models query py line in filter or exclude inplace self query add q q args kwargs file opt virtualenvs staging mathesar lib site packages django db models sql query py line in add q clause self add q q object self used aliases file opt virtualenvs staging mathesar lib site packages django db models sql query py line in add q child clause needed inner self build filter file opt virtualenvs staging mathesar lib site packages django db models sql query py line in build filter condition self build lookup lookups col value file opt virtualenvs staging mathesar lib site packages django db models sql query py line in build lookup lookup lookup class lhs rhs file opt virtualenvs staging mathesar lib site packages django db models lookups py line in init self rhs self get prep lookup file opt virtualenvs staging mathesar lib site 
packages django db models fields related lookups py line in get prep lookup self rhs target field get prep value self rhs file opt virtualenvs staging mathesar lib site packages django db models fields init py line in get prep value raise e class exception type typeerror at api db tables records exception value field id expected a number but got cc mathemancer kgodey i m putting this in the first release milestone but only because i think someone from the backend team should look at it and then decide how to better prioritize it | 1 |
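The root cause in the traceback above is that a permission condition filters `SchemaRole` on `request.user` while the request is unauthenticated, so Django's ORM tries to coerce an `AnonymousUser` object to an integer id. Below is a minimal, framework-free sketch of that failure mode and of one possible guard; the stand-in classes and the `is_viewer` helper are illustrative, not Mathesar's or Django's actual code:

```python
# Minimal sketch of the failure mode from the traceback. Django's
# AnonymousUser raises TypeError when coerced to int; the ORM does exactly
# that when a user object is used in a filter on an id field. These
# stand-in classes are illustrative, not Django's real implementations.

class AnonymousUser:
    is_authenticated = False
    def __int__(self):
        raise TypeError("Cannot cast AnonymousUser to int.")

class User:
    is_authenticated = True
    def __init__(self, pk):
        self.pk = pk
    def __int__(self):
        return self.pk

def filter_roles(user, role_user_ids):
    """Stand-in for SchemaRole.objects.filter(user=user): the 'ORM' must
    coerce the user to its integer id, which is where the HTTP 500 came from."""
    return [uid for uid in role_user_ids if uid == int(user)]

def is_viewer(user, role_user_ids):
    # One possible fix: refuse unauthenticated users up front, so the
    # permission check returns False instead of raising TypeError.
    if not user.is_authenticated:
        return False
    return bool(filter_roles(user, role_user_ids))
```

With the guard in place, an expired session yields a clean "not permitted" result (which the framework can turn into a 403) rather than an unhandled TypeError.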
808,390 | 30,081,309,995 | IssuesEvent | 2023-06-29 03:49:00 | oilshell/oil | https://api.github.com/repos/oilshell/oil | closed | Oil expression syntax after V1 | oil-language low-priority maybe-new-syntax | Deferring this stuff for now.
High priority:
- [ ] suffixes on numeric literals: `100 Mi` could be sugar for `100 * Mi`, where `Mi = 1 << 20`
- [ ] float literals can have `000_000` like integers
- [x] multiline strings
Low Priority:
- ~~dataflow syntax~~ `=>`
- including mixing byte streams and structured data in the same pipeline with `|=>` and `=>|`
- this might be replaced by #955: lazy arg lists with `[]`
- ~~Since we have D->key, maybe have D?->key for the "null safe" version?~~
- PHP 8 has it https://www.php.net/releases/8.0/en.php
- 2023: `d->get()` should be enough
- [ ] dict comprehensions
- [ ] `:symbol`
- [ ] assignment expressions: `if (var x = ...)`, `if (set x = ...)`
- 2023: meh
- ~~HTML literals ? Or maybe procs and blocks are better~~
- 2023: Markaby HTML
Tea stuff:
- [ ] `;` and `=>` for `Func` type expressions
- [ ] `as` for casting, or "type ascription"? `@[] as Array[Int]` vs `Array[Int]()`?
- [ ] `new` for instantiating objects and records? `new Point(3, 5)`. Is it optional?
- [ ] match/case as expression? (in addition to statement)
Decided against:
- ~~JS-like template strings.~~ -- suffix for escaping
Ideas:
- [ ] Other "assignment" operators
- Make has conditional assignment, `PYTHONPATH ?= '.'`
- Grammars have `::=`
- Go has `:=`
- Prolog has `:-`
- Problem: all of these could be confused with commands now. For example, `cmd :=`
- but we could DISALLOW `:` and `?` in the first word as we do with `=`. And then we can look
ahead to the second word and detect these?
| 1.0 | Oil expression syntax after V1 - Deferring this stuff for now.
High priority:
- [ ] suffixes on numeric literals: `100 Mi` could be sugar for `100 * Mi`, where `Mi = 1 << 20`
- [ ] float literals can have `000_000` like integers
- [x] multiline strings
Low Priority:
- ~~dataflow syntax~~ `=>`
- including mixing byte streams and structured data in the same pipeline with `|=>` and `=>|`
- this might be replaced by #955: lazy arg lists with `[]`
- ~~Since we have D->key, maybe have D?->key for the "null safe" version?~~
- PHP 8 has it https://www.php.net/releases/8.0/en.php
- 2023: `d->get()` should be enough
- [ ] dict comprehensions
- [ ] `:symbol`
- [ ] assignment expressions: `if (var x = ...)`, `if (set x = ...)`
- 2023: meh
- ~~HTML literals ? Or maybe procs and blocks are better~~
- 2023: Markaby HTML
Tea stuff:
- [ ] `;` and `=>` for `Func` type expressions
- [ ] `as` for casting, or "type ascription"? `@[] as Array[Int]` vs `Array[Int]()`?
- [ ] `new` for instantiating objects and records? `new Point(3, 5)`. Is it optional?
- [ ] match/case as expression? (in addition to statement)
Decided against:
- ~~JS-like template strings.~~ -- suffix for escaping
Ideas:
- [ ] Other "assignment" operators
- Make has conditional assignment, `PYTHONPATH ?= '.'`
- Grammars have `::=`
- Go has `:=`
- Prolog has `:-`
- Problem: all of these could be confused with commands now. For example, `cmd :=`
- but we could DISALLOW `:` and `?` in the first word as we do with `=`. And then we can look
ahead to the second word and detect these?
| non_main | oil expression syntax after deferring this stuff for now high priority suffixes on numeric literals mi could be sugar for mi where mi float literals can have like integers multiline strings low priority dataflow syntax including mixing byte streams and structured data in the same pipeline with and this might be replaced by lazy arg lists with since we have d key maybe have d key for the null safe version php has it d get should be enough dict comprehensions symbol assignment expressions if var x if set x meh html literals or maybe procs and blocks are better markaby html tea stuff and for func type expressions as for casting or type ascription as array vs array new for instantiating objects and records new point is it optional match case as expression in addition to statement decided against js like template strings suffix for escaping ideas other assignment operators make has conditional assignment pythonpath grammars have go has prolog has problem all of these could be confused with commands now for example cmd but we could disallow and in the first word as we do with and then we can look ahead to the second word and detect these | 0 |
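For the first high-priority item above (numeric suffixes like `100 Mi` as sugar for `100 * Mi`, where `Mi = 1 << 20`), the desugaring amounts to multiplication by predefined binary (IEC) constants. A Python sketch under that reading; the constant values follow the proposal, while the `desugar` helper is hypothetical rather than Oil's actual parser:

```python
# Hypothetical desugaring of suffixed numeric literals such as `100 Mi`
# into plain multiplication, using the IEC binary prefixes. Only the
# constants' values come from the proposal; the helper is illustrative.

Ki = 1 << 10   # 1,024
Mi = 1 << 20   # 1,048,576
Gi = 1 << 30   # 1,073,741,824

SUFFIXES = {"Ki": Ki, "Mi": Mi, "Gi": Gi}

def desugar(literal: str) -> int:
    """Turn '100 Mi' into 100 * Mi; plain integer literals pass through."""
    parts = literal.split()
    if len(parts) == 2 and parts[1] in SUFFIXES:
        return int(parts[0]) * SUFFIXES[parts[1]]
    return int(parts[0])
```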
197,670 | 6,962,483,006 | IssuesEvent | 2017-12-08 13:56:20 | vanilla-framework/vanilla-framework | https://api.github.com/repos/vanilla-framework/vanilla-framework | opened | Add prefixes to appearance for form element | Priority: Medium Type: Bug | For projects without Autoprefixer, form element styling will not be reset. For example, the drop-down.
```css
-webkit-appearance: none;
-moz-appearance: none;
appearance: none;
``` | 1.0 | Add prefixes to appearance for form element - For projects without Autoprefixer, form element styling will not be reset. For example, the drop-down.
```css
-webkit-appearance: none;
-moz-appearance: none;
appearance: none;
``` | non_main | add prefixes to appearence for form elelment for projects without autoprefixer the form elements styling will not be reset for example the drop down css webkit appearance none moz appearance none appearance none | 0 |
5,710 | 30,186,521,841 | IssuesEvent | 2023-07-04 12:30:10 | RfastOfficial/Rfast | https://api.github.com/repos/RfastOfficial/Rfast | closed | Rfast::Rowsort | bug-to-unmaintained-package | I'd like to ask whether you could provide an Rfast version that contains the rowSort function, because this function is needed when inferring the putative gene regulatory network (GRN) in the bigScale2 package. And here is the error:
> results.ctl=compute.network(expr.data = expr.ctl,gene.names = gene.names)
[1] "Pre-processing) Removing null rows "
[1] "Discarding 4182 genes with all zero values"
[1] "PASSAGE 1) Setting the size factors ...."
[1] "PASSAGE 2) Setting the bins for the expression data ...."
[1] "Creating edges..."
[1] "47.2 % of elements < 10 counts, therefore Using a reads compatible binning"
[1] "PASSAGE 3) Storing in the single cell object the Normalized data ...."
[1] "PASSAGE 4) Computing the numerical model (can take from a few minutes to 30 mins) ...."
[1] 22089 1313
[1] "I remove 1349 genes not expressed enough"
[1] "Calculating normalized-transformed matrix ..."
[1] "Computing transformed matrix ..."
[1] "Converting sparse to full Matrix ..."
[1] "Normalizing expression gene by gene ..."
[1] "Calculating Pearson correlations ..."
[1] "Clustering ..."
[1] "Calculating optimal cut of dendrogram for pre-clustering"
[1] "We are here"
[1] "Pre-clustering: cutting the tree at 6.00 %: 15 pre-clusters of median(mean) size 84 (87.5333)"
[1] "Computed Numerical Model. Enumerated a total of 1.11135e+09 cases"
[1] "PASSAGE 5) Clustering ..."
[1] "Clustering cells down to groups of approximately 50-250 cells"
Recursive clustering, beginning round 1 ....[1] "Analyzing 1313 cells for ODgenes, min_ODscore=2.00"
Error: 'rowSort' is not an exported object from 'namespace:Rfast'
And I could not find an Rfast version containing the rowSort function on CRAN, so I wonder if you could provide a download URL. | True | Rfast::Rowsort - I'd like to ask whether you could provide an Rfast version that contains the rowSort function, because this function is needed when inferring the putative gene regulatory network (GRN) in the bigScale2 package. And here is the error:
> results.ctl=compute.network(expr.data = expr.ctl,gene.names = gene.names)
[1] "Pre-processing) Removing null rows "
[1] "Discarding 4182 genes with all zero values"
[1] "PASSAGE 1) Setting the size factors ...."
[1] "PASSAGE 2) Setting the bins for the expression data ...."
[1] "Creating edges..."
[1] "47.2 % of elements < 10 counts, therefore Using a reads compatible binning"
[1] "PASSAGE 3) Storing in the single cell object the Normalized data ...."
[1] "PASSAGE 4) Computing the numerical model (can take from a few minutes to 30 mins) ...."
[1] 22089 1313
[1] "I remove 1349 genes not expressed enough"
[1] "Calculating normalized-transformed matrix ..."
[1] "Computing transformed matrix ..."
[1] "Converting sparse to full Matrix ..."
[1] "Normalizing expression gene by gene ..."
[1] "Calculating Pearson correlations ..."
[1] "Clustering ..."
[1] "Calculating optimal cut of dendrogram for pre-clustering"
[1] "We are here"
[1] "Pre-clustering: cutting the tree at 6.00 %: 15 pre-clusters of median(mean) size 84 (87.5333)"
[1] "Computed Numerical Model. Enumerated a total of 1.11135e+09 cases"
[1] "PASSAGE 5) Clustering ..."
[1] "Clustering cells down to groups of approximately 50-250 cells"
Recursive clustering, beginning round 1 ....[1] "Analyzing 1313 cells for ODgenes, min_ODscore=2.00"
Error: 'rowSort' is not an exported object from 'namespace:Rfast'
And I have no where to find the Rfast version containing Rowsort function in CRAN. So I wonder if you could offer the loading url. | main | rfast rowsort i d like to ask whether you could provide a rfast version that contain the rowsort function because this function is needed when infering the putative gene regulatory network grn in the package and here is the error results ctl compute network expr data expr ctl gene names gene names pre processing removing null rows discarding genes with all zero values passage setting the size factors passage setting the bins for the expression data creating edges of elements counts therefore using a reads compatible binning passage storing in the single cell object the normalized data passage computing the numerical model can take from a few minutes to mins i remove genes not expressed enough calculating normalized transformed matrix computing transformed matrix converting sparse to full matrix normalizing expression gene by gene calculating pearson correlations clustering calculating optimal cut of dendrogram for pre clustering we are here pre clustering cutting the tree at pre clusters of median mean size computed numerical model enumerated a total of cases passage clustering clustering cells down to groups of approximately cells recursive clustering beginning round analyzing cells for odgenes min odscore error rowsort is not an exported object from namespace rfast and i have no where to find the rfast version containing rowsort function in cran so i wonder if you could offer the loading url | 1 |
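`Rfast::rowSort` sorts each row of a numeric matrix independently (roughly what `t(apply(m, 1, sort))` does in base R). While the report above is about the symbol not being exported from the installed build, the operation itself is easy to reproduce; here is a pure-Python stand-in for readers without a working Rfast version (the helper name is mine, not the package's API):

```python
# Pure-Python stand-in for Rfast::rowSort: sort the values inside each
# row independently, leaving the order of the rows themselves unchanged.

def row_sort(matrix, descending=False):
    """Return a new matrix whose rows are individually sorted."""
    return [sorted(row, reverse=descending) for row in matrix]
```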
1,822 | 6,577,329,897 | IssuesEvent | 2017-09-12 00:09:10 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | IAM user can not go from N to 0 groups. | affects_2.0 aws bug_report cloud waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
cloud/amazon/iam
<!--- Name of the plugin/module/task -->
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.0.2.0
config file = /home/sbrady/.ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
```
$ cat ~/.ansible.cfg
[defaults]
nocows=1
[ssh_connection]
pipelining = True
```
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Linux but issue should not be platform specific.
##### SUMMARY
<!--- Explain the problem briefly -->
When trying to change a user's group membership from one or more groups to no groups, no groups are changed.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
iam:
iam_type: "user"
name: "joe"
groups: []
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Expected "joe" to no longer be in the "foo" group.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
"joe" remained in the "foo" group.
Examining the code, I see the issue.
https://github.com/ansible/ansible-modules-core/blob/a8e5f27b2c27eabc3a9fff9c3719da6ea1fb489d/cloud/amazon/iam.py#L683
The module uses `if groups:`, where groups is a list. An empty list ("I want this user to be in no groups") evaluates to `False`, so the block does not execute. I believe the author meant to check whether the parameter had been passed at all.
Please advise if I am mis-using the module, or can provide more information.
Thanks.
| True | IAM user can not go from N to 0 groups. - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
cloud/amazon/iam
<!--- Name of the plugin/module/task -->
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.0.2.0
config file = /home/sbrady/.ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
```
$ cat ~/.ansible.cfg
[defaults]
nocows=1
[ssh_connection]
pipelining = True
```
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Linux but issue should not be platform specific.
##### SUMMARY
<!--- Explain the problem briefly -->
When trying to change a user's group membership from one or more groups to no groups, no groups are changed.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
iam:
iam_type: "user"
name: "joe"
groups: []
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Expected "joe" to no longer be in the "foo" group.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
"joe" remained in the "foo" group.
Examining the code, I see the issue.
https://github.com/ansible/ansible-modules-core/blob/a8e5f27b2c27eabc3a9fff9c3719da6ea1fb489d/cloud/amazon/iam.py#L683
The module uses `if groups:`, where groups is a list. Any empty list ("I want this user to be in no groups") will evaluate to `False`, and therefore the block will not execute. I believe the author meant to check if the parameter had been passed at all.
Please advise if I am mis-using the module, or can provide more information.
Thanks.
| main | iam user can not go from n to groups issue type bug report component name cloud amazon iam ansible version ansible config file home sbrady ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables cat ansible cfg nocows pipelining true os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific linux but issue should not be platform specific summary when trying to change group membership of a user from one or more groups to no groups no groups are changed steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used iam iam type user name joe groups expected results expected joe to no longer be in the foo group actual results joe remained in the foo group examining the code i see the issue the module uses if groups where groups is a list any empty list i want this user to be in no groups will evaluate to false and therefore the block will not execute i believe the author meant to check if the parameter had been passed at all please advise if i am mis using the module or can provide more information thanks | 1 |
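The diagnosis in the report above, that `if groups:` treats an explicit empty list the same as an absent parameter, is a classic Python truthiness pitfall: both `[]` and `None` are falsy. A minimal sketch of the bug and of the `is not None` check the reporter suggests; the function names are illustrative, not the iam module's actual code:

```python
# The truthiness bug from cloud/amazon/iam.py in miniature: `if groups:`
# is False for both None ("parameter not passed") and [] ("put the user
# in zero groups"), so an explicit empty list is silently ignored.

def update_groups_buggy(current, groups=None):
    if groups:                      # [] is falsy, so this block is skipped
        return list(groups)
    return list(current)            # existing memberships kept unchanged

def update_groups_fixed(current, groups=None):
    if groups is not None:          # distinguishes "absent" from "empty"
        return list(groups)
    return list(current)
```

With the fixed check, `groups=[]` removes the user from every group, while omitting the parameter still leaves memberships untouched.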
5,694 | 2,578,193,096 | IssuesEvent | 2015-02-12 21:39:39 | SiCKRAGETV/sickrage-issues | https://api.github.com/repos/SiCKRAGETV/sickrage-issues | closed | KODI notification settings reset/cleared after update | 1: Bug / issue 2: Low Priority 3: Confirmed 3: Fix branch: master | Branch: master
Commit hash: 33e95870e0
Your operating system and python version: Ubuntu 3.13.0-43 (XBMbuntu) , python 2.7.6
What you did: updated via web interface by clicking the "Update Now" link
What happened: all the KODI notification settings were set back to defaults and KODI notifications were turned off
What you expected: all the KODI notification settings to remain at what I set them to before the update
No logs for this one. This has always happened for me whenever updating (at least the last 3 or 4 updates), but I'm just getting around to reporting it. I don't use any other notification methods so I can't tell if any of those also get reset. | 1.0 | KODI notification settings reset/cleared after update - Branch: master
Commit hash: 33e95870e0
Your operating system and python version: Ubuntu 3.13.0-43 (XBMbuntu) , python 2.7.6
What you did: updated via web interface by clicking the "Update Now" link
What happened: all the KODI notification settings were set back to defaults and KODI notifications were turned off
What you expected: all the KODI notification settings to remain at what I set them to before the update
No logs for this one. This has always happened for me whenever updating (at least the last 3 or 4 updates), but I'm just getting around to reporting it. I don't use any other notification methods so I can't tell if any of those also get reset. | non_main | kodi notification settings reset cleared after update branch master commit hash your operating system and python version ubuntu xbmbuntu python what you did updated via web interface by clicking the update now link what happened all the kodi notification settings were set back to defaults and kodi notifications were turned off what you expected all the kodi notification settings to remain at what i set them to before the update no logs for this one this has always happened for me whenever updating at least the last or updates but i m just getting around to reporting it i don t use any other notification methods so i can t tell if any of those also get reset | 0 |
442,745 | 12,749,593,331 | IssuesEvent | 2020-06-26 23:26:54 | harryjubb/bee_iot | https://api.github.com/repos/harryjubb/bee_iot | closed | Serve the live stream as HLS | effort-3 priority-critical server | RTMP is not very browser compatible. Forward the RTMP publish to HLS (on a tmpfs), and serve this proxied behind SSL. | 1.0 | Serve the live stream as HLS - RTMP is not very browser compatible. Forward the RTMP publish to HLS (on a tmpfs), and serve this proxied behind SSL. | non_main | serve the live stream as hls rtmp is not very browser compatible forward the rtmp publish to hls on a tmpfs and serve this proxied behind ssl | 0 |
152,462 | 13,456,634,602 | IssuesEvent | 2020-09-09 08:07:45 | AzureAD/microsoft-identity-web | https://api.github.com/repos/AzureAD/microsoft-identity-web | closed | [Documentation] - Question about wiki page statement | documentation fixed | ### Documentation related to component
Wiki page "Why use Microsoft.Identity.Web?"
### Please check all that apply
- [ ] typo
- [ ] documentation doesn't exist
- [X] documentation needs clarification
- [ ] error(s) in the example
- [ ] needs an example
### Description of the issue
In the [Why use Microsoft.Identity.Web?](https://github.com/AzureAD/microsoft-identity-web/wiki/Microsoft-Identity-Web-basics) wiki page, it states (emphasis mine):
> Today, without Microsoft Identity Web, when doing `dotnet new --auth` and creating a Web App from an ASP.NET core template, the application is targeting the Azure AD v1.0 endpoint, which means sign-in with a work or school account is the only option for customers. **There is also no issuer validation happening when the token is returned.**
> The Web Apps and Web APIs that are created do not call downstream Web APIs, if a developer wanted to call a downstream Web API, they would need to leverage MSAL.
My understanding is that both the OpenID Connect and JWT Bearer authentication providers in ASP.NET Core do perform issuer validation, most of the time based on the value that is retrieved from the OpenID Connect metadata document.
Could you please elaborate on that statement?
| 1.0 | [Documentation] - Question about wiki page statement - ### Documentation related to component
Wiki page "Why use Microsoft.Identity.Web?"
### Please check all that apply
- [ ] typo
- [ ] documentation doesn't exist
- [X] documentation needs clarification
- [ ] error(s) in the example
- [ ] needs an example
### Description of the issue
In the [Why use Microsoft.Identity.Web?](https://github.com/AzureAD/microsoft-identity-web/wiki/Microsoft-Identity-Web-basics) wiki page, it states (emphasis mine):
> Today, without Microsoft Identity Web, when doing `dotnet new --auth` and creating a Web App from an ASP.NET core template, the application is targeting the Azure AD v1.0 endpoint, which means sign-in with a work or school account is the only option for customers. **There is also no issuer validation happening when the token is returned.**
> The Web Apps and Web APIs that are created do not call downstream Web APIs, if a developer wanted to call a downstream Web API, they would need to leverage MSAL.
My understanding is that both the OpenID Connect and JWT Bearer authentication providers in ASP.NET Core do perform issuer validation, most of the time based on the value that is retrieved from the OpenID Connect metadata document.
Could you please elaborate on that statement?
| non_main | question about wiki page statement documentation related to component wiki page why use microsoft identity web please check all that apply typo documentation doesn t exist documentation needs clarification error s in the example needs an example description of the issue in the wiki page it states emphasis mine today without microsoft identity web when doing dotnet new auth and creating a web app from an asp net core template the application is targeting the azure ad endpoint which means sign in with a work or school account is the only option for customers there is also no issuer validation happening when the token is returned the web apps and web apis that are created do not call downstream web apis if a developer wanted to call a downstream web api they would need to leverage msal my understanding is that both the openid connect and jwt bearer authentication providers in asp net core do perform issuer validation most of the time based on the value that is retrieved from the openid connect metadata document could you please elaborate on that statement | 0 |
1,437 | 6,227,702,132 | IssuesEvent | 2017-07-10 21:20:52 | duckduckgo/zeroclickinfo-spice | https://api.github.com/repos/duckduckgo/zeroclickinfo-spice | closed | Just Delete Me: wrong account info displayed | Maintainer Input Requested Relevancy Suggestion | When I search for "delete icloud account," I'm shown instructions on deleting a JoliCloud / JoiliDrive account.
https://duckduckgo.com/?q=delete+icloud+account&t=ffab&ia=answer
---
IA Page: http://duck.co/ia/view/just_delete_me
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @riqpe
| True | Just Delete Me: wrong account info displayed - When I search for "delete icloud account," I'm shown instructions on deleting a JoliCloud / JoiliDrive account.
https://duckduckgo.com/?q=delete+icloud+account&t=ffab&ia=answer
---
IA Page: http://duck.co/ia/view/just_delete_me
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @riqpe
| main | just delete me wrong account info displayed when i search for delete icloud account i m shown instructions on deleting a jolicloud joilidrive account ia page riqpe | 1 |
3,629 | 14,678,461,996 | IssuesEvent | 2020-12-31 03:25:30 | zaproxy/zaproxy | https://api.github.com/repos/zaproxy/zaproxy | opened | Add-ons Log4j 2.x Uplift | Maintainability add-on tracker | Align add-ons with core logging changes per:
* https://github.com/zaproxy/zaproxy/pull/6228
* https://github.com/zaproxy/zaproxy/pull/6276
* https://github.com/zaproxy/zaproxy/pull/6327
* https://github.com/zaproxy/zaproxy/pull/6327
See also: http://logging.apache.org/log4j/2.x/manual/api.html
- [ ] accessControl
- [ ] addOns
- [ ] alertFilters
- [ ] alertReport
- [ ] allinonenotes
- [ ] ascanrules
- [ ] ascanrulesAlpha
- [ ] ascanrulesBeta
- [ ] authstats
- [ ] beanshell
- [ ] birtreports
- [ ] browserView
- [ ] bruteforce
- [ ] bugtracker
- [ ] callgraph
- [ ] codedx
- [ ] commonlib
- [ ] customreport
- [ ] diff
- [ ] domxss
- [ ] encoder
- [ ] exportreport
- [ ] formhandler
- [ ] frontendscanner
- [ ] fuzz
- [ ] gettingStarted
- [ ] graphql
- [ ] httpsInfo
- [ ] imagelocationscanner
- [ ] importLogFiles
- [ ] importurls
- [ ] invoke
- [ ] jython
- [ ] openapi
- [ ] plugnhack
- [ ] portscan
- [ ] pscanrules
- [ ] pscanrulesAlpha
- [ ] pscanrulesBeta
- [ ] quickstart
- [ ] replacer
- [ ] requester
- [ ] retire
- [ ] reveal
- [ ] revisit
- [ ] saml
- [ ] saverawmessage
- [ ] savexmlmessage
- [ ] scripts
- [ ] selenium
- [ ] sequence
- [ ] simpleexample
- [ ] soap
- [ ] spiderAjax
- [ ] sqliplugin
- [ ] sse
- [ ] tlsdebug
- [ ] todo
- [ ] tokengen
- [ ] viewstate
- [ ] wappalyzer
- [ ] wavsepRpt
- [ ] websocket
- [ ] zestAddOn | True | Add-ons Log4j 2.x Uplift - Align add-ons with core logging changes per:
* https://github.com/zaproxy/zaproxy/pull/6228
* https://github.com/zaproxy/zaproxy/pull/6276
* https://github.com/zaproxy/zaproxy/pull/6327
* https://github.com/zaproxy/zaproxy/pull/6327
See also: http://logging.apache.org/log4j/2.x/manual/api.html
- [ ] accessControl
- [ ] addOns
- [ ] alertFilters
- [ ] alertReport
- [ ] allinonenotes
- [ ] ascanrules
- [ ] ascanrulesAlpha
- [ ] ascanrulesBeta
- [ ] authstats
- [ ] beanshell
- [ ] birtreports
- [ ] browserView
- [ ] bruteforce
- [ ] bugtracker
- [ ] callgraph
- [ ] codedx
- [ ] commonlib
- [ ] customreport
- [ ] diff
- [ ] domxss
- [ ] encoder
- [ ] exportreport
- [ ] formhandler
- [ ] frontendscanner
- [ ] fuzz
- [ ] gettingStarted
- [ ] graphql
- [ ] httpsInfo
- [ ] imagelocationscanner
- [ ] importLogFiles
- [ ] importurls
- [ ] invoke
- [ ] jython
- [ ] openapi
- [ ] plugnhack
- [ ] portscan
- [ ] pscanrules
- [ ] pscanrulesAlpha
- [ ] pscanrulesBeta
- [ ] quickstart
- [ ] replacer
- [ ] requester
- [ ] retire
- [ ] reveal
- [ ] revisit
- [ ] saml
- [ ] saverawmessage
- [ ] savexmlmessage
- [ ] scripts
- [ ] selenium
- [ ] sequence
- [ ] simpleexample
- [ ] soap
- [ ] spiderAjax
- [ ] sqliplugin
- [ ] sse
- [ ] tlsdebug
- [ ] todo
- [ ] tokengen
- [ ] viewstate
- [ ] wappalyzer
- [ ] wavsepRpt
- [ ] websocket
- [ ] zestAddOn | main | add ons x uplift align add ons with core logging changes per see also accesscontrol addons alertfilters alertreport allinonenotes ascanrules ascanrulesalpha ascanrulesbeta authstats beanshell birtreports browserview bruteforce bugtracker callgraph codedx commonlib customreport diff domxss encoder exportreport formhandler frontendscanner fuzz gettingstarted graphql httpsinfo imagelocationscanner importlogfiles importurls invoke jython openapi plugnhack portscan pscanrules pscanrulesalpha pscanrulesbeta quickstart replacer requester retire reveal revisit saml saverawmessage savexmlmessage scripts selenium sequence simpleexample soap spiderajax sqliplugin sse tlsdebug todo tokengen viewstate wappalyzer wavseprpt websocket zestaddon | 1 |
95,905 | 12,059,504,280 | IssuesEvent | 2020-04-15 19:23:51 | fablabbcn/fablabs.io | https://api.github.com/repos/fablabbcn/fablabs.io | closed | Show which Labs are now closed | Design | Once we will have complete (or almost complete) data about when Labs where created or closed from #108, we should clearly make a list of closed Labs and visualize them in a different way in their own page. This is important in order to keep an history of the network.
| 1.0 | Show which Labs are now closed - Once we will have complete (or almost complete) data about when Labs where created or closed from #108, we should clearly make a list of closed Labs and visualize them in a different way in their own page. This is important in order to keep an history of the network.
| non_main | show which labs are now closed once we will have complete or almost complete data about when labs where created or closed from we should clearly make a list of closed labs and visualize them in a different way in their own page this is important in order to keep an history of the network | 0 |
2,169 | 7,602,697,669 | IssuesEvent | 2018-04-29 05:05:23 | caskroom/homebrew-cask | https://api.github.com/repos/caskroom/homebrew-cask | opened | Casks with GPG stanzas | awaiting maintainer feedback meta | These need to be checked before https://github.com/Homebrew/brew/pull/4120 is merged so we don't get reports of install failures / incorrect signatures / "security issues".
- [ ] `1password-cli`
- [ ] `bisq`
- [ ] `bitpay`
- [ ] `borgbackup`
- [ ] `copay`
- [ ] `coyim`
- [ ] `electrum-ltc`
- [ ] `electrum`
- [ ] `espionage`
- [ ] `gpg-suite`
- [ ] `gpg-sync`
- [ ] `jameica`
- [ ] `keepassxc`
- [ ] `libreoffice-language-pack`
- [ ] `libreoffice`
- [ ] `litecoin`
- [ ] `lyx`
- [ ] `macports`
- [ ] `metasploit`
- [ ] `mullvadvpn`
- [ ] `multibit-hd`
- [ ] `mumble`
- [ ] `mysqlworkbench`
- [ ] `onionshare`
- [ ] `pgadmin4`
- [ ] `ricochet`
- [ ] `torbrowser`
- [ ] `tunnelblick`
- [ ] `vlc`
`versions`
- [ ] `gpg-suite-nightly`
- [ ] `libreoffice-rc`
- [ ] `libreoffice-still`
- [ ] `libreofficedev`
- [ ] `multibit-classic`
- [ ] `pgadmin3`
- [ ] `torbrowser-alpha`
- [ ] `tunnelblick-beta` | True | Casks with GPG stanzas - These need to be checked before https://github.com/Homebrew/brew/pull/4120 is merged so we don't get reports of install failures / incorrect signatures / "security issues".
- [ ] `1password-cli`
- [ ] `bisq`
- [ ] `bitpay`
- [ ] `borgbackup`
- [ ] `copay`
- [ ] `coyim`
- [ ] `electrum-ltc`
- [ ] `electrum`
- [ ] `espionage`
- [ ] `gpg-suite`
- [ ] `gpg-sync`
- [ ] `jameica`
- [ ] `keepassxc`
- [ ] `libreoffice-language-pack`
- [ ] `libreoffice`
- [ ] `litecoin`
- [ ] `lyx`
- [ ] `macports`
- [ ] `metasploit`
- [ ] `mullvadvpn`
- [ ] `multibit-hd`
- [ ] `mumble`
- [ ] `mysqlworkbench`
- [ ] `onionshare`
- [ ] `pgadmin4`
- [ ] `ricochet`
- [ ] `torbrowser`
- [ ] `tunnelblick`
- [ ] `vlc`
`versions`
- [ ] `gpg-suite-nightly`
- [ ] `libreoffice-rc`
- [ ] `libreoffice-still`
- [ ] `libreofficedev`
- [ ] `multibit-classic`
- [ ] `pgadmin3`
- [ ] `torbrowser-alpha`
- [ ] `tunnelblick-beta` | main | casks with gpg stanzas these need to be checked before is merged so we don t get reports of install failures incorrect signatures security issues cli bisq bitpay borgbackup copay coyim electrum ltc electrum espionage gpg suite gpg sync jameica keepassxc libreoffice language pack libreoffice litecoin lyx macports metasploit mullvadvpn multibit hd mumble mysqlworkbench onionshare ricochet torbrowser tunnelblick vlc versions gpg suite nightly libreoffice rc libreoffice still libreofficedev multibit classic torbrowser alpha tunnelblick beta | 1 |
1,757 | 6,574,984,203 | IssuesEvent | 2017-09-11 14:41:32 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Default deployment mode is inconistent with Azure ARM tools | affects_2.3 azure bug_report cloud waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
https://docs.ansible.com/ansible/azure_rm_deployment_module.html
##### SUMMARY
<!--- Explain the problem briefly -->
The default mode for deployment should be incremental. This is the default mode of the native azure tools.
https://azure.microsoft.com/en-us/documentation/articles/resource-group-template-deploy/#incremental-and-complete-deployments
It seems dangerous to choose a destructive mode as the default.
NOTE - Please Do not close as being in the wrong repo. The module docs say it is an extra but my ticket there was closed - https://github.com/ansible/ansible-modules-extras/issues/3189
| True | Default deployment mode is inconistent with Azure ARM tools - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
https://docs.ansible.com/ansible/azure_rm_deployment_module.html
##### SUMMARY
<!--- Explain the problem briefly -->
The default mode for deployment should be incremental. This is the default mode of the native azure tools.
https://azure.microsoft.com/en-us/documentation/articles/resource-group-template-deploy/#incremental-and-complete-deployments
It seems dangerous to choose a destructive mode as the default.
NOTE - Please Do not close as being in the wrong repo. The module docs say it is an extra but my ticket there was closed - https://github.com/ansible/ansible-modules-extras/issues/3189
| main | default deployment mode is inconistent with azure arm tools issue type bug report component name summary the default mode for deployment should be incremental this is the default mode of the native azure tools it seems dangerous to choose a destructive mode as the default note please do not close as being in the wrong repo the module docs say it is an extra but my ticket there was closed | 1 |
4,970 | 25,535,662,606 | IssuesEvent | 2022-11-29 11:47:04 | RalfKoban/MiKo-Analyzers | https://api.github.com/repos/RalfKoban/MiKo-Analyzers | opened | Report train wrecks as violation of information hiding principle | feature Area: analyzer Area: maintainability feasability unclear | We should report warnings for so-called train wrecks that violate the information hiding principle.
Train wrecks are calls such as `var x = A.B.C.D.E.F.G.h()` where A - G are properties or invocations.
The reason for the warning is that there is too much information involved and spread across the code. To call `h()` you need to know that there is an `A` which has a `B` which in turn has a `C` that has a `D` ... | True | Report train wrecks as violation of information hiding principle - We should report warnings for so-called train wrecks that violate the information hiding principle.
Train wrecks are calls such as `var x = A.B.C.D.E.F.G.h()` where A - G are properties or invocations.
The reason for the warning is that there is too much information involved and spread across the code. To call `h()` you need to know that there is an `A` which has a `B` which in turn has a `C` that has a `D` ... | main | report train wrecks as violation of information hiding principle we should report warnings for so called train wrecks that violate the information hiding principle train wrecks are calls such as var x a b c d e f g h where a g are properties or invocations the reason for the warning is that there is too much information involved and spread across the code to call h you need to know that there is an a which has a b which in turn has a c that has a d | 1 |
1,847 | 6,577,385,522 | IssuesEvent | 2017-09-12 00:32:40 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | ec2_asg module sporadically fails to get async notification | affects_2.0 aws bug_report cloud waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2_asg
##### ANSIBLE VERSION
```
ansible 2.0.0.2
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Running from: Ubuntu Linux
Managing: Ubuntu Linux
##### SUMMARY
We (seeming randomly) get cases where we'll do a blue/green deploy using the "ec2_asg" module and have an async task waiting for the module to return a result. The task that waits for the result never gets the async notification and therefore fails despite the deploy succeeding.
##### STEPS TO REPRODUCE
- Create a new launch config (our new "blue" deploy)
- Run the "ec2_asg" task with the new launch config (with async set and "poll: 0")
- Have a task later in the playbook waiting on the result
- Confirm that the deploy succeeds in AWS (new instances brought up, old ones terminated)
- See that the "async_status" job never gets the notification that the deploy has happened
```
- name: Create integration tier launch configuration
ec2_lc:
name: "{{ environ }}-int-launch-config-{{ current_time }}"
[OMITTED FOR BREVITY]
register: int_launch_configuration
- name: Create Integration Autoscaling group
ec2_asg:
name: "{{ environ }}-int-asg"
launch_config_name: "{{ environ }}-int-launch-config-{{ current_time }}"
vpc_zone_identifier: "{{ int_subnets }}"
health_check_type: "ELB"
health_check_period: 400
termination_policies: "OldestInstance"
replace_all_instances: yes
wait_timeout: 2400
replace_batch_size: "{{ int_replace_batch_size }}"
async: 1000
poll: 0
register: int_asg_sleeper
- name: 'int ASG - check on fire and forget task'
async_status: jid={{ int_asg_sleeper.ansible_job_id }}
register: int_asg_job_result
until: int_asg_job_result.finished
retries: 60
delay: 15
```
##### EXPECTED RESULTS
Expected that when the deploy succeeds and the "old" instances are terminated, the Async job gets the message and reports success.
##### ACTUAL RESULTS
It appears that the "file" mechanism which Python/Ansible use for checking on the status of background jobs fails and the file is never populated, despite the job finishing. Therefore the job polling the file times out eventually.
```
08:03:34.063 TASK [launch-config : int ASG - check on fire and forget task] *****************
08:03:34.130 fatal: [localhost]: FAILED! => {"failed": true, "msg": "ERROR! The conditional check 'int_asg_job_result.finished' failed. The error was: ERROR! error while evaluating conditional (int_asg_job_result.finished): ERROR! 'dict object' has no attribute 'finished'"}
```
| True | ec2_asg module sporadically fails to get async notification - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2_asg
##### ANSIBLE VERSION
```
ansible 2.0.0.2
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Running from: Ubuntu Linux
Managing: Ubuntu Linux
##### SUMMARY
We (seeming randomly) get cases where we'll do a blue/green deploy using the "ec2_asg" module and have an async task waiting for the module to return a result. The task that waits for the result never gets the async notification and therefore fails despite the deploy succeeding.
##### STEPS TO REPRODUCE
- Create a new launch config (our new "blue" deploy)
- Run the "ec2_asg" task with the new launch config (with async set and "poll: 0")
- Have a task later in the playbook waiting on the result
- Confirm that the deploy succeeds in AWS (new instances brought up, old ones terminated)
- See that the "async_status" job never gets the notification that the deploy has happened
```
- name: Create integration tier launch configuration
ec2_lc:
name: "{{ environ }}-int-launch-config-{{ current_time }}"
[OMITTED FOR BREVITY]
register: int_launch_configuration
- name: Create Integration Autoscaling group
ec2_asg:
name: "{{ environ }}-int-asg"
launch_config_name: "{{ environ }}-int-launch-config-{{ current_time }}"
vpc_zone_identifier: "{{ int_subnets }}"
health_check_type: "ELB"
health_check_period: 400
termination_policies: "OldestInstance"
replace_all_instances: yes
wait_timeout: 2400
replace_batch_size: "{{ int_replace_batch_size }}"
async: 1000
poll: 0
register: int_asg_sleeper
- name: 'int ASG - check on fire and forget task'
async_status: jid={{ int_asg_sleeper.ansible_job_id }}
register: int_asg_job_result
until: int_asg_job_result.finished
retries: 60
delay: 15
```
##### EXPECTED RESULTS
Expected that when the deploy succeeds and the "old" instances are terminated, the Async job gets the message and reports success.
##### ACTUAL RESULTS
It appears that the "file" mechanism which Python/Ansible use for checking on the status of background jobs fails and the file is never populated, despite the job finishing. Therefore the job polling the file times out eventually.
```
08:03:34.063 TASK [launch-config : int ASG - check on fire and forget task] *****************
08:03:34.130 fatal: [localhost]: FAILED! => {"failed": true, "msg": "ERROR! The conditional check 'int_asg_job_result.finished' failed. The error was: ERROR! error while evaluating conditional (int_asg_job_result.finished): ERROR! 'dict object' has no attribute 'finished'"}
```
| main | asg module sporadically fails to get async notification issue type bug report component name asg ansible version ansible configuration os environment running from ubuntu linux managing ubuntu linux summary we seeming randomly get cases where we ll do a blue green deploy using the asg module and have an async task waiting for the module to return a result the task that waits for the result never gets the async notification and therefore fails despite the deploy succeeding steps to reproduce create a new launch config our new blue deploy run the asg task with the new launch config with async set and poll have a task later in the playbook waiting on the result confirm that the deploy succeeds in aws new instances brought up old ones terminated see that the async status job never gets the notification that the deploy has happened name create integration tier launch configuration lc name environ int launch config current time register int launch configuration name create integration autoscaling group asg name environ int asg launch config name environ int launch config current time vpc zone identifier int subnets health check type elb health check period termination policies oldestinstance replace all instances yes wait timeout replace batch size int replace batch size async poll register int asg sleeper name int asg check on fire and forget task async status jid int asg sleeper ansible job id register int asg job result until int asg job result finished retries delay expected results expected that when the deploy succeeds and the old instances are terminated the async job gets the message and reports success actual results it appears that the file mechanism which python ansible use for checking on the status of background jobs fails and the file is never populated despite the job finishing therefore the job polling the file times out eventually task fatal failed failed true msg error the conditional check int asg job result finished failed the error was error error while evaluating conditional int asg job result finished error dict object has no attribute finished | 1 |
2,587 | 8,798,373,601 | IssuesEvent | 2018-12-24 07:08:42 | dgets/nightMiner | https://api.github.com/repos/dgets/nightMiner | opened | Properly assign destinations prior to early_blockade() being activated | bug enhancement maintainability | I'm noticing, as of this last commit (that being edf515dced7c2f8bafa3e2774d33a2722a161ea1), that as of _nightMiner.py_, line 68, that though the _early_blockade_ routine can be accessed here, none of the destinations have been set up for any ships that are going to be working on it yet. Clearly I got sidetracked and forgot to properly note where I had left off on things.
I'm not sure if this is responsible for any of the issues from an improperly set `current_assignments[id].destination` value just yet, but fixing it along the way just in case, and because it's sure to introduce new bugs at some point here. | True | Properly assign destinations prior to early_blockade() being activated - I'm noticing, as of this last commit (that being edf515dced7c2f8bafa3e2774d33a2722a161ea1), that as of _nightMiner.py_, line 68, that though the _early_blockade_ routine can be accessed here, none of the destinations have been set up for any ships that are going to be working on it yet. Clearly I got sidetracked and forgot to properly note where I had left off on things.
I'm not sure if this is responsible for any of the issues from an improperly set `current_assignments[id].destination` value just yet, but fixing it along the way just in case, and because it's sure to introduce new bugs at some point here. | main | properly assign destinations prior to early blockade being activated i m noticing as of this last commit that being that as of nightminer py line that though the early blockade routine can be accessed here none of the destinations have been set up for any ships that are going to be working on it yet clearly i got sidetracked and forgot to properly note where i had left off on things i m not sure if this is responsible for any of the issues from an improperly set current assignments destination value just yet but fixing it along the way just in case and because it s sure to introduce new bugs at some point here | 1 |
796,565 | 28,118,427,190 | IssuesEvent | 2023-03-31 12:36:06 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | admanager.google.com - see bug description | status-needsinfo browser-firefox priority-critical os-mac engine-gecko | <!-- @browser: Firefox 110.0 -->
<!-- @ua_header: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko/20100101 Firefox/110.0 -->
<!-- @reported_with: unknown -->
**URL**: https://admanager.google.com/8663477#delivery/line_item/detail/line_item_id=6228226242&order_id=3159041016&li_tab=settings&sort_by=StartDateTime&sort_asc=false
**Browser / Version**: Firefox 110.0
**Operating System**: Mac OS X 10.15
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: Only your browser is affeccting the PL line
**Steps to Reproduce**:
Whenever I save the PL in the line item level it goes back to zero. These are live campaigns and it only happened this week. These campaigns have been running all quarter. On the campaign level it shows the PL that I had put in the line, but at the line level it shows zero and when I change it to say 8 and hit save it will revert back to zero. It does not show this in Chrome.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2023/3/15df21d7-4a5a-4d4b-817a-c6258881f8fc.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | admanager.google.com - see bug description - <!-- @browser: Firefox 110.0 -->
<!-- @ua_header: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko/20100101 Firefox/110.0 -->
<!-- @reported_with: unknown -->
**URL**: https://admanager.google.com/8663477#delivery/line_item/detail/line_item_id=6228226242&order_id=3159041016&li_tab=settings&sort_by=StartDateTime&sort_asc=false
**Browser / Version**: Firefox 110.0
**Operating System**: Mac OS X 10.15
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: Only your browser is affeccting the PL line
**Steps to Reproduce**:
Whenever I save the PL in the line item level it goes back to zero. These are live campaigns and it only happened this week. These campaigns have been running all quarter. On the campaign level it shows the PL that I had put in the line, but at the line level it shows zero and when I change it to say 8 and hit save it will revert back to zero. It does not show this in Chrome.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2023/3/15df21d7-4a5a-4d4b-817a-c6258881f8fc.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_main | admanager google com see bug description url browser version firefox operating system mac os x tested another browser yes chrome problem type something else description only your browser is affeccting the pl line steps to reproduce whenever i save the pl in the line item level it goes back to zero these are live campaigns and it only happened this week these campaigns have been running all quarter on the campaign level it shows the pl that i had put in the line but at the line level it shows zero and when i change it to say and hit save it will revert back to zero it does not show this in chrome view the screenshot img alt screenshot src browser configuration none from with ❤️ | 0 |
78,994 | 10,097,778,064 | IssuesEvent | 2019-07-28 09:24:01 | kakawait/hugo-tranquilpeak-theme | https://api.github.com/repos/kakawait/hugo-tranquilpeak-theme | closed | ERROR function "hugo" not defined in "theme/partials/meta.html" | bug documentation | ### Configuration
- **Operating system with its version**: Fedora 29
- **Browser with its version**: N/A
- **Hugo version**: Hugo Static Site Generator v0.37.1
- **Tranquilpeak version**: 0.4.6-BETA
- **Do you reproduce on https://tranquilpeak.kakawait.com demo?**:
### Actual behaviour
Creating a new page (or starting the local server) threw me an error:
```
user@localhost:/home/user $ hugo new posts/my-first-post.md
ERROR 2019/07/24 17:01:17 Failed to add template "theme/partials/meta.html" in path "/home/user/Scripts/dev/web/hugo-test/themes/tranquilpeak/layouts/partials/meta.html": template: theme/partials/meta.html:4: function "hugo" not defined
ERROR 2019/07/24 17:01:17 theme/partials/meta.html : template: theme/partials/meta.html:4: function "hugo" not defined
```
### Expected behaviour
It didn't complain for `hugo` function.
### Steps to reproduce the behaviour
Use a relatively old hugo version (>0.30, ~<0.??), create a new site (`hugo new site test`), `git clone` the fresh version of this theme (0.4.6). From there simply run `hugo new posts/my-first-post.md`.
| 1.0 | ERROR function "hugo" not defined in "theme/partials/meta.html" - ### Configuration
- **Operating system with its version**: Fedora 29
- **Browser with its version**: N/A
- **Hugo version**: Hugo Static Site Generator v0.37.1
- **Tranquilpeak version**: 0.4.6-BETA
- **Do you reproduce on https://tranquilpeak.kakawait.com demo?**:
### Actual behaviour
Creating a new page (or starting the local server) threw me an error:
```
user@localhost:/home/user $ hugo new posts/my-first-post.md
ERROR 2019/07/24 17:01:17 Failed to add template "theme/partials/meta.html" in path "/home/user/Scripts/dev/web/hugo-test/themes/tranquilpeak/layouts/partials/meta.html": template: theme/partials/meta.html:4: function "hugo" not defined
ERROR 2019/07/24 17:01:17 theme/partials/meta.html : template: theme/partials/meta.html:4: function "hugo" not defined
```
### Expected behaviour
It didn't complain for `hugo` function.
### Steps to reproduce the behaviour
Use a relatively old hugo version (>0.30, ~<0.??), create a new site (`hugo new site test`), `git clone` the fresh version of this theme (0.4.6). From there simply run `hugo new posts/my-first-post.md`.
| non_main | error function hugo not defined in theme partials meta html configuration operating system with its version fedora browser with its version n a hugo version hugo static site generator tranquilpeak version beta do you reproduce on demo actual behaviour create a new page or starting as local server through me an error user localhost home user hugo new posts my first post md error failed to add template theme partials meta html in path home user scripts dev web hugo test themes tranquilpeak layouts partials meta html template theme partials meta html function hugo not defined error theme partials meta html template theme partials meta html function hugo not defined expected behaviour it didn t complain for hugo function steps to reproduce the behaviour use an relatively old hugo version create a new site hugo new site test git clone the fresh version of this theme from there simply run hugo new posts my first post md | 0 |
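For anyone hitting the same parse failure: Go templates fail at parse time when a function name is unknown, so a theme that calls the global `hugo` template function simply cannot load on a Hugo build that predates it — which is why the reporter's 0.37.1 errors while newer releases work. A minimal sketch of the version gate implied here (treating 0.46.0 as the floor where the `hugo` function appeared is an assumption; check Hugo's release notes for the real minimum):

```python
# Sketch: compare a local Hugo version against an assumed minimum.
# The 0.46.0 floor for the global `hugo` template function is an
# assumption, not a confirmed fact from the issue.
def parse_version(v: str) -> tuple:
    """Turn '0.37.1' into (0, 37, 1) so tuple comparison orders versions."""
    return tuple(int(part) for part in v.split("."))

def supports_hugo_function(version: str, floor: str = "0.46.0") -> bool:
    """True if `version` is at or above the assumed minimum."""
    return parse_version(version) >= parse_version(floor)

print(supports_hugo_function("0.37.1"))  # the reporter's version -> False
```

The same tuple comparison is what a theme README's "minimum Hugo version" requirement boils down to: either upgrade Hugo past the floor or pin an older theme release that never calls the newer function.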
4,311 | 21,708,637,398 | IssuesEvent | 2022-05-10 12:02:17 | MarcusWolschon/osmeditor4android | https://api.github.com/repos/MarcusWolschon/osmeditor4android | closed | Improve test coverage | Medium Task Maintainability Work in progress | Contrary to what would be done today, when vespucci was created in 2009 no tests were created in parallel with the rest of development. Moving to a conventional gradle build has made it much easier to write tests; however, many of the geometry manipulation methods are not unit-test friendly (they do not directly return results) and test coverage is currently still low (16% when this issue was created).
As a general rule we should not be adding new functionality without tests, and we should slowly work through the backlog as time permits.
~~Note these numbers do not contain coverage for the tile server for which, because it is run in a separate process, we currently can't dump coverage data.~~
Coverage 20170301 21%
Coverage 20170320 32%
Coverage 20170706 39%
Coverage 20171220 41%
Coverage 20180117 44%
Coverage 20180425 48%
Coverage 20180606 50%
Coverage 20190627 54%
Coverage 20190908 56%
Coverage 20200423 58%
Coverage 20200830 60%
Coverage 20201105 61.5%
Coverage 20201223 62%
Coverage 20210301 63.5%
Coverage 20210826 65.3%
Coverage 20211230 67.0%
Coverage 20220510 70.0%
Closing now, further coverage improvements can be followed on sonarcloud. | True | Improve test coverage - Contrary to what would be done today, when vespucci was created in 2009 no tests were created in parallel with the rest of development. Moving to a conventional gradle build has made it much easier to write tests; however, many of the geometry manipulation methods are not unit-test friendly (they do not directly return results) and test coverage is currently still low (16% when this issue was created).
As a general rule we should not be adding new functionality without tests, and we should slowly work through the backlog as time permits.
~~Note these numbers do not contain coverage for the tile server for which, because it is run in a separate process, we currently can't dump coverage data.~~
Coverage 20170301 21%
Coverage 20170320 32%
Coverage 20170706 39%
Coverage 20171220 41%
Coverage 20180117 44%
Coverage 20180425 48%
Coverage 20180606 50%
Coverage 20190627 54%
Coverage 20190908 56%
Coverage 20200423 58%
Coverage 20200830 60%
Coverage 20201105 61.5%
Coverage 20201223 62%
Coverage 20210301 63.5%
Coverage 20210826 65.3%
Coverage 20211230 67.0%
Coverage 20220510 70.0%
Closing now, further coverage improvements can be followed on sonarcloud. | main | improve test coverage contrary to what would be done today when vespucci was created in no tests were created parallel to the rest of development moving to a conventional gradle build has made it much easier to write tests however many of the geometry manipulation methods are not unit test friendly do not directly return results and test coverage is currently still low when this issue was created as a general rule we should not be adding new functionality without tests and slowly work ourselves through the backlog as time permits note these numbers do not contain coverage for the tile server for which because it is run in a separate process we currently can t dump coverage data coverage coverage coverage coverage coverage coverage coverage coverage coverage coverage coverage coverage coverage coverage coverage coverage coverage closing now further coverage improvements can be followed on sonarcloud | 1 |
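The progress log above is regular enough to be machine-readable; a small sketch that parses a few of the `Coverage YYYYMMDD NN%` lines (the entries are copied from the issue, but the subset chosen here is arbitrary) and reports the overall gain:

```python
import re

# A handful of entries transcribed from the issue's progress log.
log = """
Coverage 20170301 21%
Coverage 20180606 50%
Coverage 20201223 62%
Coverage 20220510 70.0%
"""

# Each match yields (date string, percentage as float).
entries = [(m.group(1), float(m.group(2)))
           for m in re.finditer(r"Coverage (\d{8}) ([\d.]+)%", log)]

first_date, first_pct = entries[0]
last_date, last_pct = entries[-1]
print(f"{first_date} -> {last_date}: +{last_pct - first_pct:.1f} points")
```

Run against the full log this gives the same headline number the issue closes on: roughly 49 percentage points gained between 2017 and 2022.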
2,736 | 9,684,222,964 | IssuesEvent | 2019-05-23 13:20:19 | ipfs/package-managers | https://api.github.com/repos/ipfs/package-managers | closed | Cladistic tree of depths of integration | Audience: Package manager maintainers Type: Discussion topic | Preface: Understanding package managers -- let alone comprehensively enough to begin to describe the design spaces for growing them towards decentralization and imagining the design space of future ones -- is eternally tricky. I've been hoping to get some more stuff rolling in terms of either short checklists, or terminology that we might define to help shortcut design discussions to clarity faster, and so on. This is another attempt, and it certainly won't be an end-all, all-inclusive, etc; but it's a shot.
This is a quick, fairly high-level outline of tactical implementation choices that one could imagine making when designing a package manager with _some_ level of IPFS integration.
---
We could present this as a cladistic tree of tactical choices, one bool choice at a time:
- using IPFS: NO
- (effect: womp womp)
- using IPFS: YES
- aware of IPFS for content: NO
- (effect: this is implemented by doing crass bulk snapshotting of some static http filesystem. works, but bulky.)
- aware of IPFS for content: YES
- (effect: means we've got CIDs in the index metadata.)
- uses IPFS for index: NO
- (effect: well, okay, at least we provided a good content bucket!)
- (effect: presumably this means some centralized index service is in the mix. we don't have snapshotting over it. we're no closer to scoring nice properties like 'reproducible resolve'.)
- uses IPFS for index: YES
- (effect: pkgman clients can do full offline operation!)
- (effect: n.b., it's not clear at this stage whether we're fully decentralized: keep reading.)
- index is just bulk files: YES
- (effect: easiest to build this way. leaves a lot of work on the client. not necessarily very optimal -- need to get the full index as files even if you're only interested in a subset of it.)
- (effect: this does not get us any closer to 'pollination'-readiness or subset-of-index features -- effectively forces centralization for updating the index file...!!!)
- index is just bulk files: NO
- (effect: means we've got them to implement some index features in IPLD.)
- (effect: dedup for storage should be skyrocketing, since IPLD-native implementations naturally get granular copy-on-write goodness.)
- (effect: depends on how well it's done...! but this might get us towards subsets, pollination, and real decentralization.)
- TODO: EXPAND THIS. What choices are important to get us towards subsets, pollination, and real decentralization?
The "TODO" on the end is intentional :) and should be read as an open invitation for future thoughts. I suspect there's a lot of diverse choices possible here.
---
Another angle for looking at this is which broad categories of operation are part of the duties typically expected of the hosting side of a package manager:
- distributing (read-only) content
- distributing (read-only) "metadata"/"index" data -- mapping package names to content IDs, etc
- updating the "metadata"/"index" -- accepting new content, reconciling metadata changes, etc
This is a much more compact view of things, but interestingly, the cladistic tree above maps surprising closely to this: as the tree above gets deeper, we're basically moving from "content" to "distributing the index" to (at the end, in TODOspace of the cladistic tree) "distributed publishing".
---
These are just a couple of angles to look at things from. Are there better ways to lay this out? Can we resolve deeper into that tree to see what yet-more decentralized operations would look like? | True | Cladistic tree of depths of integration - Preface: Understanding package managers -- let alone comprehensively enough to begin to describe the design spaces for growing them towards decentralization and imagining the design space of future ones -- is eternally tricky. I've been hoping to get some more stuff rolling in terms of either short checklists, or terminology that we might define to help shortcut design discussions to clarity faster, and so on. This is another attempt, and it certainly won't be an end-all, all-inclusive, etc; but it's a shot.
This is a quick, fairly high-level outline of tactical implementation choices that one could imagine making when designing a package manager with _some_ level of IPFS integration.
---
We could present this as a cladistic tree of tactical choices, one bool choice at a time:
- using IPFS: NO
- (effect: womp womp)
- using IPFS: YES
- aware of IPFS for content: NO
- (effect: this is implemented by doing crass bulk snapshotting of some static http filesystem. works, but bulky.)
- aware of IPFS for content: YES
- (effect: means we've got CIDs in the index metadata.)
- uses IPFS for index: NO
- (effect: well, okay, at least we provided a good content bucket!)
- (effect: presumably this means some centralized index service is in the mix. we don't have snapshotting over it. we're no closer to scoring nice properties like 'reproducible resolve'.)
- uses IPFS for index: YES
- (effect: pkgman clients can do full offline operation!)
- (effect: n.b., it's not clear at this stage whether we're fully decentralized: keep reading.)
- index is just bulk files: YES
- (effect: easiest to build this way. leaves a lot of work on the client. not necessarily very optimal -- need to get the full index as files even if you're only interested in a subset of it.)
- (effect: this does not get us any closer to 'pollination'-readiness or subset-of-index features -- effectively forces centralization for updating the index file...!!!)
- index is just bulk files: NO
- (effect: means we've got them to implement some index features in IPLD.)
- (effect: dedup for storage should be skyrocketing, since IPLD-native implementations naturally get granular copy-on-write goodness.)
- (effect: depends on how well it's done...! but this might get us towards subsets, pollination, and real decentralization.)
- TODO: EXPAND THIS. What choices are important to get us towards subsets, pollination, and real decentralization?
The "TODO" on the end is intentional :) and should be read as an open invitation for future thoughts. I suspect there's a lot of diverse choices possible here.
---
Another angle for looking at this is which broad categories of operation are part of the duties typically expected of the hosting side of a package manager:
- distributing (read-only) content
- distributing (read-only) "metadata"/"index" data -- mapping package names to content IDs, etc
- updating the "metadata"/"index" -- accepting new content, reconciling metadata changes, etc
This is a much more compact view of things, but interestingly, the cladistic tree above maps surprising closely to this: as the tree above gets deeper, we're basically moving from "content" to "distributing the index" to (at the end, in TODOspace of the cladistic tree) "distributed publishing".
---
These are just a couple of angles to look at things from. Are there better ways to lay this out? Can we resolve deeper into that tree to see what yet-more decentralized operations would look like? | main | cladistic tree of depths of integration preface understanding package managers let alone comprehensively enough to begin to describe the design spaces for growing them towards decentralizating and imagining the design space future ones is eternally tricky i ve been hoping to get some more stuff rolling in terms of either short checklists or terminology that we might define to help shortcut design discussions to clarity faster and so on this is another attempt and it certainly won t be an end all all inclusive etc but it s a shot this is a quick fairly high level outline of tactical implementation choices that one could imagine making when designing a package manager with some level of ipfs integration we could present this as a cladistic tree of tactical choices one bool choice at a time using ipfs no effect womp womp using ipfs yes aware of ipfs for content no effect this is implemented by doing crass bulk snapshotting of some static http filesystem works but bulky aware of ipfs for content yes effect means we ve got cids in the index metadata uses ipfs for index no effect well okay at least we provided a good content bucket effect presumably this means some centralized index service is in the mix we don t have snapshotting over it we re no closer to scoring nice properties like reproducible resolve uses ipfs for index yes effect pkgman clients can do full offline operation effect n b it s not clear at this stage whether we re fully decentralized keep reading index is just bulk files yes effect easiest to build this way leaves a lot of work on the client not necessarily very optimal need to get the full index as files even if you re only interested in a subset of it effect this does not get us any closer to pollination readiness or subset of index features 
effectively forces centralization for updating the index file index is just bulk files no effect means we ve got them to implement some index features in ipld effect dedup for storage should be skyrocketing since ipld native implementations naturally get granular copy on write goodness effect depends on how well it s done but this might get us towards subsets pollination and real decentralization todo expand this what choices are important to get us towards subsets pollination and real decentralization the todo on the end is intentional and should be read as an open invitation for future thoughts i suspect there s a lot of diverse choices possible here another angle for looking at this is which broad categories of operation are part of the duties typically expected of the hosting side of a package manager distributing read only content distributing read only metadata index data mapping package names to content ids etc updating the metadata index accepting new content reconciling metadata changes etc this is a much more compact view of things but interestingly the cladistic tree above maps surprising closely to this as the tree above gets deeper we re basically moving from content to distributing the index to at the end in todospace of the cladistic tree distributed publishing these are just a couple of angles to look at things from are there better ways to lay this out can we resolve deeper into that tree to see what yet more decentralized operations would look like | 1 |
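The boolean tree in the issue can be replayed mechanically; here is a sketch encoding it as nested choices, with every node label paraphrased from the text above (none of this is a real IPFS or IPLD API):

```python
# Each node is {choice_name: {False: outcome, True: subtree-or-outcome}}.
# Leaf strings paraphrase the "effect" notes from the issue.
TREE = {
    "uses_ipfs": {
        False: "womp womp",
        True: {
            "content_aware": {
                False: "crass bulk snapshot of a static http filesystem",
                True: {
                    "index_on_ipfs": {
                        False: "good content bucket, but index stays centralized",
                        True: {
                            "index_is_bulk_files": {
                                True: "offline clients, yet index updates remain centralized",
                                False: "IPLD-native index: dedup, subsets, pollination-ready",
                            }
                        },
                    }
                },
            }
        },
    }
}

def walk(tree, choices):
    """Follow a list of booleans down the tree and return the leaf effect."""
    node = tree
    for choice in choices:
        (name, branches), = node.items()  # each node holds exactly one choice
        node = branches[choice]
        if isinstance(node, str):
            return node
    return node

print(walk(TREE, [True, True, True, False]))
```

Replaying the deepest all-decentralized path lands on the leaf the issue leaves as "TODO: EXPAND THIS" — which is the point: the interesting design space starts where this tree currently ends.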
1,323 | 5,672,330,721 | IssuesEvent | 2017-04-12 00:57:05 | duckduckgo/zeroclickinfo-spice | https://api.github.com/repos/duckduckgo/zeroclickinfo-spice | closed | Public Holidays: under-triggering? | Maintainer Input Requested | Should this IA cover queries such as "when is labor day 2016?"
---
IA Page: http://duck.co/ia/view/public_holidays
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @sekhavati
| True | Public Holidays: under-triggering? - Should this IA cover queries such as "when is labor day 2016?"
---
IA Page: http://duck.co/ia/view/public_holidays
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @sekhavati
| main | public holidays under triggering should this ia cover queries such as when is labor day ia page sekhavati | 1 |
360,311 | 10,686,912,332 | IssuesEvent | 2019-10-22 15:12:20 | tunnckoCore/opensource | https://api.github.com/repos/tunnckoCore/opensource | closed | do not load filesystem cache in rollup runner, if not dist found | Pkg: jest-runners Priority: High Status: In Progress Type: Bug | cuz gives us false positives
also, do not load cache if config is changed. | 1.0 | do not load filesystem cache in rollup runner, if not dist found - cuz gives us false positives
also, do not load cache if config is changed. | non_main | do not load filesystem cache in rollup runner if not dist found cuz gives us false positives also do not load cache if config is changed | 0 |
5,440 | 27,246,482,324 | IssuesEvent | 2023-02-22 02:42:34 | VA-Explorer/va_explorer | https://api.github.com/repos/VA-Explorer/va_explorer | opened | Add ability to query by interviewer to export | Type: Maintainance | **What is the expected state?**
As a user of the export feature I'd like to download all VAs conducted by <interviewer>
**What is the actual state?**
I am not able to query by interviewer.
**Relevant context**
Ensure security permissions are honored if certain users are not able to view interviewer name or PII
| True | Add ability to query by interviewer to export - **What is the expected state?**
As a user of the export feature I'd like to download all VAs conducted by <interviewer>
**What is the actual state?**
I am not able to query by interviewer.
**Relevant context**
Ensure security permissions are honored if certain users are not able to view interviewer name or PII
| main | add ability to query by interviewer to export what is the expected state as a user of the export feature i d like to download all vas conducted by what is the actual state i am not able to query by interviewer relevant context ensure security permissions are honored is certain users are not able to view interviewer name or pii | 1 |
5,695 | 30,002,558,502 | IssuesEvent | 2023-06-26 10:14:47 | precice/precice | https://api.github.com/repos/precice/precice | opened | Don't allow duplicate coupling schemes in compositional coupling scheme | usability maintainability breaking change | The following compositional coupling scheme is currently allowed:
```
<coupling-scheme:parallel-implicit>
<participants first="SolverOne" second="SolverTwo" />
...
</coupling-scheme:parallel-implicit>
<coupling-scheme:parallel-explicit>
<participants first="SolverOne" second="SolverThree" />
...
</coupling-scheme:parallel-explicit>
<coupling-scheme:parallel-explicit>
<participants first="SolverOne" second="SolverThree" />
...
</coupling-scheme:parallel-explicit>
</precice-configuration>
```
I think one of the two parallel explicit schemes is not needed and we should raise an error. We could easily check for duplicates of coupling schemes between two identical solvers.
Again: If we reduce the number of legal configurations, this will reduce the number of required tests, simplify implementation and make preCICE more user-friendly. | True | Don't allow duplicate coupling schemes in compositional coupling scheme - The following compositional coupling scheme is currently allowed:
```
<coupling-scheme:parallel-implicit>
<participants first="SolverOne" second="SolverTwo" />
...
</coupling-scheme:parallel-implicit>
<coupling-scheme:parallel-explicit>
<participants first="SolverOne" second="SolverThree" />
...
</coupling-scheme:parallel-explicit>
<coupling-scheme:parallel-explicit>
<participants first="SolverOne" second="SolverThree" />
...
</coupling-scheme:parallel-explicit>
</precice-configuration>
```
I think one of the two parallel explicit schemes is not needed and we should raise an error. We could easily check for duplicates of coupling schemes between two identical solvers.
Again: If we reduce the number of legal configurations, this will reduce the number of required tests, simplify implementation and make preCICE more user-friendly. | main | don t allow duplicate coupling schemes in compositional coupling scheme the following compositional coupling scheme is currently allowed i think one of the two parallel explicit schemes is not needed and we should raise an error we could easily check for duplicates of coupling schemes between two identical solvers again if we reduce the number of legal configurations this will reduce the number of required tests simplify implementation and make precice more user friendly | 1 |
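The duplicate check the issue calls "easy" really is just a set membership test over participant pairs; a sketch, with the pairs transcribed from the example config (treating `(first, second)` as an ordered pair is an assumption here — preCICE might reasonably treat reversed pairs as duplicates too):

```python
# Sketch: flag a compositional configuration that wires the same
# participant pair into more than one coupling scheme.
def find_duplicate_pairs(schemes):
    """Return every (first, second) pair that appears more than once."""
    seen, dupes = set(), []
    for first, second in schemes:
        pair = (first, second)
        if pair in seen:
            dupes.append(pair)
        seen.add(pair)
    return dupes

# Pairs mirroring the example configuration above.
schemes = [
    ("SolverOne", "SolverTwo"),    # parallel-implicit
    ("SolverOne", "SolverThree"),  # parallel-explicit
    ("SolverOne", "SolverThree"),  # parallel-explicit (the redundant one)
]
print(find_duplicate_pairs(schemes))  # -> [('SolverOne', 'SolverThree')]
```

In the configuration-validation pass, raising an error whenever this list is non-empty would reject the redundant scheme at startup rather than silently accepting it.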
8 | 2,514,802,660 | IssuesEvent | 2015-01-15 14:35:47 | simplesamlphp/simplesamlphp | https://api.github.com/repos/simplesamlphp/simplesamlphp | closed | Eliminate need for shipped jQuery UI 1.5 | enhancement maintainability | simpleSAMLphp currently ships two versions of jQuery and jQuery UI under `www/resources/`; a set corresponding to jQuery UI 1.5 and one corresponding to 1.6.
All code now uses 1.6 except for `logout-iframe.php`. If `logout-iframe.php` could be updated for 1.6, the 1.5 files could be removed. | True | Eliminate need for shipped jQuery UI 1.5 - simpleSAMLphp currently ships two versions of jQuery and jQuery UI under `www/resources/`; a set corresponding to jQuery UI 1.5 and one corresponding to 1.6.
All code now uses 1.6 except for `logout-iframe.php`. If `logout-iframe.php` could be updated for 1.6, the 1.5 files could be removed. | main | eliminate need for shipped jquery ui simplesamlphp currently ships two versions of jquery and jquery ui under www resources a set corresponding to jquery ui and one corresponding to all code now uses except for logout iframe php if logout iframe php could be updated for the files could be removed | 1 |
110,411 | 16,979,891,283 | IssuesEvent | 2021-06-30 07:27:55 | SmartBear/ready-mqtt-plugin | https://api.github.com/repos/SmartBear/ready-mqtt-plugin | closed | CVE-2019-17638 (High) detected in jetty-server-9.4.29.v20200521.jar - autoclosed | security vulnerability | ## CVE-2019-17638 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jetty-server-9.4.29.v20200521.jar</b></p></summary>
<p>The core jetty server artifact.</p>
<p>Library home page: <a href="http://www.eclipse.org/jetty">http://www.eclipse.org/jetty</a></p>
<p>Path to dependency file: ready-mqtt-plugin/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-server/9.4.29.v20200521/jetty-server-9.4.29.v20200521.jar</p>
<p>
Dependency Hierarchy:
- ready-api-soapui-pro-3.3.1.jar (Root Library)
- ready-api-soapui-3.3.1.jar
- :x: **jetty-server-9.4.29.v20200521.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/SmartBear/ready-mqtt-plugin/commit/72456065a443f2258660fde64bebd87fcbc170bb">72456065a443f2258660fde64bebd87fcbc170bb</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Eclipse Jetty, versions 9.4.27.v20200227 to 9.4.29.v20200521, in case of too large response headers, Jetty throws an exception to produce an HTTP 431 error. When this happens, the ByteBuffer containing the HTTP response headers is released back to the ByteBufferPool twice. Because of this double release, two threads can acquire the same ByteBuffer from the pool and while thread1 is about to use the ByteBuffer to write response1 data, thread2 fills the ByteBuffer with other data. Thread1 then proceeds to write the buffer that now contains different data. This results in client1, which issued request1 seeing data from another request or response which could contain sensitive data belonging to client2 (HTTP session ids, authentication credentials, etc.). If the Jetty version cannot be upgraded, the vulnerability can be significantly reduced by configuring a responseHeaderSize significantly larger than the requestHeaderSize (12KB responseHeaderSize and 8KB requestHeaderSize).
<p>Publish Date: 2020-07-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17638>CVE-2019-17638</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=564984">https://bugs.eclipse.org/bugs/show_bug.cgi?id=564984</a></p>
<p>Release Date: 2020-07-09</p>
<p>Fix Resolution: org.eclipse.jetty:jetty-server:9.4.30.v20200611;org.eclipse.jetty:jetty-runner:9.4.30.v20200611</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.eclipse.jetty","packageName":"jetty-server","packageVersion":"9.4.29.v20200521","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.smartbear:ready-api-soapui-pro:3.3.1;com.smartbear:ready-api-soapui:3.3.1;org.eclipse.jetty:jetty-server:9.4.29.v20200521","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.eclipse.jetty:jetty-server:9.4.30.v20200611;org.eclipse.jetty:jetty-runner:9.4.30.v20200611"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-17638","vulnerabilityDetails":"In Eclipse Jetty, versions 9.4.27.v20200227 to 9.4.29.v20200521, in case of too large response headers, Jetty throws an exception to produce an HTTP 431 error. When this happens, the ByteBuffer containing the HTTP response headers is released back to the ByteBufferPool twice. Because of this double release, two threads can acquire the same ByteBuffer from the pool and while thread1 is about to use the ByteBuffer to write response1 data, thread2 fills the ByteBuffer with other data. Thread1 then proceeds to write the buffer that now contains different data. This results in client1, which issued request1 seeing data from another request or response which could contain sensitive data belonging to client2 (HTTP session ids, authentication credentials, etc.). 
If the Jetty version cannot be upgraded, the vulnerability can be significantly reduced by configuring a responseHeaderSize significantly larger than the requestHeaderSize (12KB responseHeaderSize and 8KB requestHeaderSize).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17638","cvss3Severity":"high","cvss3Score":"9.4","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2019-17638 (High) detected in jetty-server-9.4.29.v20200521.jar - autoclosed - ## CVE-2019-17638 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jetty-server-9.4.29.v20200521.jar</b></p></summary>
<p>The core jetty server artifact.</p>
<p>Library home page: <a href="http://www.eclipse.org/jetty">http://www.eclipse.org/jetty</a></p>
<p>Path to dependency file: ready-mqtt-plugin/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-server/9.4.29.v20200521/jetty-server-9.4.29.v20200521.jar</p>
<p>
Dependency Hierarchy:
- ready-api-soapui-pro-3.3.1.jar (Root Library)
- ready-api-soapui-3.3.1.jar
- :x: **jetty-server-9.4.29.v20200521.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/SmartBear/ready-mqtt-plugin/commit/72456065a443f2258660fde64bebd87fcbc170bb">72456065a443f2258660fde64bebd87fcbc170bb</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Eclipse Jetty, versions 9.4.27.v20200227 to 9.4.29.v20200521, in case of too large response headers, Jetty throws an exception to produce an HTTP 431 error. When this happens, the ByteBuffer containing the HTTP response headers is released back to the ByteBufferPool twice. Because of this double release, two threads can acquire the same ByteBuffer from the pool and while thread1 is about to use the ByteBuffer to write response1 data, thread2 fills the ByteBuffer with other data. Thread1 then proceeds to write the buffer that now contains different data. This results in client1, which issued request1 seeing data from another request or response which could contain sensitive data belonging to client2 (HTTP session ids, authentication credentials, etc.). If the Jetty version cannot be upgraded, the vulnerability can be significantly reduced by configuring a responseHeaderSize significantly larger than the requestHeaderSize (12KB responseHeaderSize and 8KB requestHeaderSize).
<p>Publish Date: 2020-07-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17638>CVE-2019-17638</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
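The 9.4 above is reproducible from the listed base metrics. A stdlib-only sketch of the CVSS v3.0 base-score formula, with the metric weights hard-coded from the specification (Scope: Unchanged):

```python
import math

# CVSS v3.0 weights for the metric values listed above.
AV = 0.85   # Attack Vector: Network
AC = 0.77   # Attack Complexity: Low
PR = 0.85   # Privileges Required: None (weight for Scope: Unchanged)
UI = 0.85   # User Interaction: None
C, I, A = 0.56, 0.56, 0.22  # Confidentiality: High, Integrity: High, Availability: Low

def roundup(x: float) -> float:
    """CVSS 'Round up' to one decimal place."""
    return math.ceil(x * 10) / 10

iss = 1 - (1 - C) * (1 - I) * (1 - A)      # Impact Sub-Score
impact = 6.42 * iss                        # Scope: Unchanged branch
exploitability = 8.22 * AV * AC * PR * UI
base = roundup(min(impact + exploitability, 10))
print(base)  # 9.4, matching the report above
```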
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=564984">https://bugs.eclipse.org/bugs/show_bug.cgi?id=564984</a></p>
<p>Release Date: 2020-07-09</p>
<p>Fix Resolution: org.eclipse.jetty:jetty-server:9.4.30.v20200611;org.eclipse.jetty:jetty-runner:9.4.30.v20200611</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.eclipse.jetty","packageName":"jetty-server","packageVersion":"9.4.29.v20200521","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.smartbear:ready-api-soapui-pro:3.3.1;com.smartbear:ready-api-soapui:3.3.1;org.eclipse.jetty:jetty-server:9.4.29.v20200521","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.eclipse.jetty:jetty-server:9.4.30.v20200611;org.eclipse.jetty:jetty-runner:9.4.30.v20200611"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-17638","vulnerabilityDetails":"In Eclipse Jetty, versions 9.4.27.v20200227 to 9.4.29.v20200521, in case of too large response headers, Jetty throws an exception to produce an HTTP 431 error. When this happens, the ByteBuffer containing the HTTP response headers is released back to the ByteBufferPool twice. Because of this double release, two threads can acquire the same ByteBuffer from the pool and while thread1 is about to use the ByteBuffer to write response1 data, thread2 fills the ByteBuffer with other data. Thread1 then proceeds to write the buffer that now contains different data. This results in client1, which issued request1 seeing data from another request or response which could contain sensitive data belonging to client2 (HTTP session ids, authentication credentials, etc.). 
If the Jetty version cannot be upgraded, the vulnerability can be significantly reduced by configuring a responseHeaderSize significantly larger than the requestHeaderSize (12KB responseHeaderSize and 8KB requestHeaderSize).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17638","cvss3Severity":"high","cvss3Score":"9.4","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_main | cve high detected in jetty server jar autoclosed cve high severity vulnerability vulnerable library jetty server jar the core jetty server artifact library home page a href path to dependency file ready mqtt plugin pom xml path to vulnerable library home wss scanner repository org eclipse jetty jetty server jetty server jar dependency hierarchy ready api soapui pro jar root library ready api soapui jar x jetty server jar vulnerable library found in head commit a href found in base branch master vulnerability details in eclipse jetty versions to in case of too large response headers jetty throws an exception to produce an http error when this happens the bytebuffer containing the http response headers is released back to the bytebufferpool twice because of this double release two threads can acquire the same bytebuffer from the pool and while is about to use the bytebuffer to write data fills the bytebuffer with other data then proceeds to write the buffer that now contains different data this results in which issued seeing data from another request or response which could contain sensitive data belonging to http session ids authentication credentials etc if the jetty version cannot be upgraded the vulnerability can be significantly reduced by configuring a responseheadersize significantly larger than the requestheadersize responseheadersize and requestheadersize publish date url a href cvss score details base score metrics exploitability metrics attack vector 
network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org eclipse jetty jetty server org eclipse jetty jetty runner isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree com smartbear ready api soapui pro com smartbear ready api soapui org eclipse jetty jetty server isminimumfixversionavailable true minimumfixversion org eclipse jetty jetty server org eclipse jetty jetty runner basebranches vulnerabilityidentifier cve vulnerabilitydetails in eclipse jetty versions to in case of too large response headers jetty throws an exception to produce an http error when this happens the bytebuffer containing the http response headers is released back to the bytebufferpool twice because of this double release two threads can acquire the same bytebuffer from the pool and while is about to use the bytebuffer to write data fills the bytebuffer with other data then proceeds to write the buffer that now contains different data this results in which issued seeing data from another request or response which could contain sensitive data belonging to http session ids authentication credentials etc if the jetty version cannot be upgraded the vulnerability can be significantly reduced by configuring a responseheadersize significantly larger than the requestheadersize responseheadersize and requestheadersize vulnerabilityurl | 0 |
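The double release described in the Jetty advisory above can be illustrated with a toy free-list pool — this is not Jetty's actual `ByteBufferPool`, just a minimal stdlib sketch of why a second `release` lets two consumers acquire the same buffer:

```python
class ToyBufferPool:
    """Toy free-list pool; names are illustrative, not Jetty's API."""
    def __init__(self):
        self._free = []

    def acquire(self) -> bytearray:
        # Reuse a freed buffer if one exists, else allocate a fresh one.
        return self._free.pop() if self._free else bytearray(8)

    def release(self, buf: bytearray) -> None:
        self._free.append(buf)  # no double-release guard, like the bug

pool = ToyBufferPool()
buf = pool.acquire()
pool.release(buf)
pool.release(buf)              # the erroneous second release (CVE-2019-17638)

thread1_buf = pool.acquire()   # "thread1" prepares response1 ...
thread2_buf = pool.acquire()   # ... while "thread2" receives the same buffer
assert thread1_buf is thread2_buf  # shared state between two responses
thread2_buf[:] = b"client2!"
print(bytes(thread1_buf))      # thread1 now sends client2's data
```

Per the advisory, the mitigation of making `responseHeaderSize` larger than `requestHeaderSize` works by making the HTTP 431 error path — the code path that performs the double release — far less likely to be reached.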
2,867 | 10,272,902,594 | IssuesEvent | 2019-08-23 17:45:33 | arcticicestudio/arctic | https://api.github.com/repos/arcticicestudio/arctic | closed | EditorConfig | context-workflow scope-dx scope-maintainability type-feature | <p align="center"><img src="https://user-images.githubusercontent.com/7836623/63612291-37239080-c5de-11e9-9b28-f15465083101.png" /></p>
<!-- Source: https://editorconfig.org/logo.png -->
Add the [EditorConfig][] file to define and maintain consistent coding styles between different editors and IDEs.
[editorconfig]: https://editorconfig.org | True | EditorConfig - <p align="center"><img src="https://user-images.githubusercontent.com/7836623/63612291-37239080-c5de-11e9-9b28-f15465083101.png" /></p>
<!-- Source: https://editorconfig.org/logo.png -->
Add the [EditorConfig][] file to define and maintain consistent coding styles between different editors and IDEs.
[editorconfig]: https://editorconfig.org | main | editorconfig add the file to define and maintain consistent coding styles between different editors and ides | 1 |
1,271 | 5,384,148,505 | IssuesEvent | 2017-02-24 09:37:51 | opencaching/opencaching-pl | https://api.github.com/repos/opencaching/opencaching-pl | opened | cache_type data definition and properties | Component_Cache Component_CacheEdit Component_Configs Component_Cron Component_DataBase General_Discussion Server_DB_Contents Type_Enhancement x_Maintainability | Cache type defines the kind of caches that are hideable. Each cache type is identified by a unique ID.
Properties:
- ID
- name (for referencing translations)
- base image icon
- container flag (does it have a physical container?)
- location flag (is it at given coordinates? eg puzzle=false)
- limit (1 for own cache)
| True | cache_type data definition and properties - Cache type defines the kind of caches that are hideable. Each cache type is identified by a unique ID.
Properties:
- ID
- name (for referencing translations)
- base image icon
- container flag (does it have a physical container?)
- location flag (is it at given coordinates? eg puzzle=false)
- limit (1 for own cache)
| main | cache type data definition and properties cache type defines the kind of caches that are hideable each cache type is identified by a unique id properties id name for referencing translations base image icon container flag does it have a physical container location flag is it at given coordinates eg puzzle false limit for own cache | 1 |
3,693 | 15,085,368,514 | IssuesEvent | 2021-02-05 18:33:39 | carbon-design-system/carbon | https://api.github.com/repos/carbon-design-system/carbon | closed | Loading indicator svg for batch actions | status: waiting for author's response 💬 status: waiting for maintainer response 💬 type: enhancement 💡 | ### Summary
Batch actions in the grid often take a few seconds. I'd like a good way to indicate this. If I put a Loading indicator above or below the grid, there is bounce in the UI when the action completes. Instead, I'd like to show the loading indicator in the batch action toolbar.

I tried doing this and it basically does what I want, but the svg doesn't render nicely on the blue background.

| True | Loading indicator svg for batch actions - ### Summary
Batch actions in the grid often take a few seconds. I'd like a good way to indicate this. If I put a Loading indicator above or below the grid, there is bounce in the UI when the action completes. Instead, I'd like to show the loading indicator in the batch action toolbar.
I tried doing this and it basically does what I want, but the svg doesn't render nicely on the blue background.

| main | loading indicator svg for batch actions summary batch actions in the grid often take a few seconds i d like a good way to indicate this if a put a loading indicator above or below the grid there is bounce in the ui when the action completes instead i d like to show the loading indicator in the batch action toolbar i tried doing this and it basically does what i want but the svg doesn t render nicely on the blue background | 1 |
3,345 | 12,968,282,026 | IssuesEvent | 2020-07-21 05:23:00 | diofant/diofant | https://api.github.com/repos/diofant/diofant | opened | Remove Poly/PurePoly classes? | maintainability needs decision polys | It might worth to keep only the functional interface for polynomial manipulation, e.g. `resultant()` function and so on. Internally, such functions will use the PolyElement class (public interface; rename to Poly?) to represent polynomials. See [this](https://github.com/sympy/sympy/pull/7236#issuecomment-37554545) comment and following discussion.
See also https://github.com/sympy/sympy/issues/18672 (Poly plays badly with func/args invariant). | True | Remove Poly/PurePoly classes? - It might worth to keep only the functional interface for polynomial manipulation, e.g. `resultant()` function and so on. Internally, such functions will use the PolyElement class (public interface; rename to Poly?) to represent polynomials. See [this](https://github.com/sympy/sympy/pull/7236#issuecomment-37554545) comment and following discussion.
See also https://github.com/sympy/sympy/issues/18672 (Poly plays badly with func/args invariant). | main | remove poly purepoly classes it might worth to keep only the functional interface for polynomial manipulation e g resultant function and so on internally such functions will use the polyelement class public interface rename to poly to represent polynomials see comment and following discussion see also poly plays badly with func args invariant | 1 |
2,547 | 8,675,068,872 | IssuesEvent | 2018-11-30 09:48:02 | citrusframework/citrus | https://api.github.com/repos/citrusframework/citrus | opened | Breaking change in waitFor().message() | Prio: High READY Type: Maintainance | **Citrus Version**
>= 2.7.7
**Description**
If you upgrade your Citrus version to 2.7.7 or higher, we have a breaking change in the HTTP wait builder API. We'll correct this with one of the future releases to ensure effortless version upgrades.
**API before change**
```java
waitFor().message("message");
```
**API after change**
```java
waitFor().message().name("message");
```
**Additional information**
* Issue: #417
* Commit: https://github.com/citrusframework/citrus/commit/515e840f9133383d19304916db197ce5fdb9ac83#diff-f106d4946b18253678933a5267aa2540R82
BR,
Sven | True | Breaking change in waitFor().message() - **Citrus Version**
>= 2.7.7
**Description**
If you upgrade your Citrus version to 2.7.7 or higher, we have a breaking change in the HTTP wait builder API. We'll correct this with one of the future releases to ensure effortless version upgrades.
**API before change**
```java
waitFor().message("message");
```
**API after change**
```java
waitFor().message().name("message");
```
**Additional information**
* Issue: #417
* Commit: https://github.com/citrusframework/citrus/commit/515e840f9133383d19304916db197ce5fdb9ac83#diff-f106d4946b18253678933a5267aa2540R82
BR,
Sven | main | breaking change in waitfor message citrus version description if you upgrade your citrus version to or higher we ve a breaking change in the http wait builder api we ll correct this with one of the future releases to ensure effortless version upgrades api before change java waitfor message message api after change java waitfor message name message additional information issue commit br sven | 1 |
2,117 | 7,199,797,277 | IssuesEvent | 2018-02-05 16:57:54 | RalfKoban/MiKo-Analyzers | https://api.github.com/repos/RalfKoban/MiKo-Analyzers | opened | Methods should have max. 5 parameters | Area: analyzer Area: maintainability feature | Methods (and ctors) should have a max. limit of 5 parameters. All above are too many parameters and either need to be consolidated or reduced. | True | Methods should have max. 5 parameters - Methods (and ctors) should have a max. limit of 5 parameters. All above are too many parameters and either need to be consolidated or reduced. | main | methods should have max parameters methods and ctors should have a max limit of parameters all above are too many parameters and either need to be consolidated or reduced | 1 |
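One standard way to get under such a parameter limit — sketched here in Python, though the analyzer above targets C# — is to consolidate related parameters into parameter objects. All names below (`connect_v1`, `ConnectionSettings`, etc.) are invented for illustration:

```python
from dataclasses import dataclass

# Before: seven positional parameters, easy to call incorrectly.
def connect_v1(host, port, user, password, timeout, retries, use_tls):
    ...

# After: related settings consolidated into two parameter objects.
@dataclass
class ConnectionSettings:
    host: str
    port: int = 5432
    timeout: float = 30.0
    retries: int = 3
    use_tls: bool = True

@dataclass
class Credentials:
    user: str
    password: str

def connect_v2(settings: ConnectionSettings, creds: Credentials) -> str:
    # Two parameters instead of seven; defaults live on the dataclass.
    scheme = "tls" if settings.use_tls else "tcp"
    return f"{scheme}://{creds.user}@{settings.host}:{settings.port}"

print(connect_v2(ConnectionSettings("db.example.org"), Credentials("bob", "s3cret")))
# tls://bob@db.example.org:5432
```

The consolidation also gives each group of parameters a place for validation and documentation, which is the usual maintainability argument behind the rule.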
754,304 | 26,380,798,808 | IssuesEvent | 2023-01-12 08:28:41 | gamefreedomgit/Maelstrom | https://api.github.com/repos/gamefreedomgit/Maelstrom | closed | [Moved from Discord][Quest: Recruitment][Zone:Tirisfal Glades] (Undead starting quest 4) - Corpses scripting error | NPC Quest - Cataclysm (1-60) Pathfinding Priority: Low Status: Confirmed Bug Report from Discord | might not be that big of an issue but just seems strange to just see the corpses only stand there rather than go on his back.
https://cata-twinhead.twinstar.cz/?quest=26800

 | 1.0 | [Moved from Discord][Quest: Recruitment][Zone:Tirisfal Glades] (Undead starting quest 4) - Corpses scripting error - might not be that big of an issue but just seems strange to just see the corpses only stand there rather than go on his back.
https://cata-twinhead.twinstar.cz/?quest=26800

| non_main | undead starting quest corpses scripting error might not be that big of an issue but just seems strange to just see the corpses only stand there rather then go on his back | 0 |
2,546 | 8,675,034,018 | IssuesEvent | 2018-11-30 09:41:59 | citrusframework/citrus | https://api.github.com/repos/citrusframework/citrus | opened | WaitBuilder.condition() unsolvable breaking change | Prio: High State: To discuss Type: Maintainance | **Citrus Version**
>= 2.7.7
**Description**
If you upgrade your Citrus version to 2.7.7 or higher, we have a breaking change on the `WaitBuilder.condition` method, which affects the method's return type.
From my perspective, this is not fixable without introducing another breaking change.
**API before change**
```java
/**
* Condition to wait for during execution.
* @param condition
* @return
*/
public WaitConditionBuilder condition(Condition condition)
```
**API after change**
```java
/**
* Condition to wait for during execution.
* @param condition
* @return
*/
public Wait condition(Condition condition)
```
**Additional information**
Issue: #417
Commit: https://github.com/citrusframework/citrus/commit/515e840f9133383d19304916db197ce5fdb9ac83#diff-fc38b7c87355bad09a45d5e7589fd83eR47
BR,
Sven | True | WaitBuilder.condition() unsolvable breaking change - **Citrus Version**
>= 2.7.7
**Description**
If you upgrade your Citrus version to 2.7.7 or higher, we have a breaking change on the `WaitBuilder.condition` method, which affects the method's return type.
From my perspective, this is not fixable without introducing another breaking change.
**API before change**
```java
/**
* Condition to wait for during execution.
* @param condition
* @return
*/
public WaitConditionBuilder condition(Condition condition)
```
**API after change**
```java
/**
* Condition to wait for during execution.
* @param condition
* @return
*/
public Wait condition(Condition condition)
```
**Additional information**
Issue: #417
Commit: https://github.com/citrusframework/citrus/commit/515e840f9133383d19304916db197ce5fdb9ac83#diff-fc38b7c87355bad09a45d5e7589fd83eR47
BR,
Sven | main | waitbuilder condition unsolvable breaking change citrus version description if you upgrade your citrus version to or higher we ve a breaking change on the waitbuilder condition method which is affecting the return type of the method from my perspective this is not fixable without introducing another breaking change api before change java condition to wait for during execution param condition return public waitconditionbuilder condition condition condition api after change java condition to wait for during execution param condition return public wait condition condition condition additional information issue commit br sven | 1 |
4,473 | 2,726,469,411 | IssuesEvent | 2015-04-15 10:37:26 | DynareTeam/dynare | https://api.github.com/repos/DynareTeam/dynare | opened | Fix bug with tests/gsa/ls2003a.mod | bug testsuite | which causes a segmentation fault. See discussion in #880 and commit a7f0366980540d668a4b54bb8517798e1ae4deac. | 1.0 | Fix bug with tests/gsa/ls2003a.mod - which causes a segmentation fault. See discussion in #880 and commit a7f0366980540d668a4b54bb8517798e1ae4deac. | non_main | fix bug with tests gsa mod which causes a segmentation fault see discussion in and commit | 0 |
179,738 | 21,580,315,820 | IssuesEvent | 2022-05-02 17:59:30 | vincenzodistasio97/excel-to-json | https://api.github.com/repos/vincenzodistasio97/excel-to-json | opened | CVE-2018-13797 (High) detected in macaddress-0.2.8.tgz | security vulnerability | ## CVE-2018-13797 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>macaddress-0.2.8.tgz</b></p></summary>
<p>Get the MAC addresses (hardware addresses) of the hosts network interfaces.</p>
<p>Library home page: <a href="https://registry.npmjs.org/macaddress/-/macaddress-0.2.8.tgz">https://registry.npmjs.org/macaddress/-/macaddress-0.2.8.tgz</a></p>
<p>Path to dependency file: /client/package.json</p>
<p>Path to vulnerable library: /client/node_modules/macaddress/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-1.1.1.tgz (Root Library)
- css-loader-0.28.7.tgz
- cssnano-3.10.0.tgz
- postcss-filter-plugins-2.0.2.tgz
- uniqid-4.1.1.tgz
- :x: **macaddress-0.2.8.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vincenzodistasio97/excel-to-json/commit/e367d4db4134dc676344b2b9fb2443300bd3c9c7">e367d4db4134dc676344b2b9fb2443300bd3c9c7</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The macaddress module before 0.2.9 for Node.js is prone to an arbitrary command injection flaw, due to allowing unsanitized input to an exec (rather than execFile) call.
<p>Publish Date: 2018-07-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-13797>CVE-2018-13797</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-13797">https://nvd.nist.gov/vuln/detail/CVE-2018-13797</a></p>
<p>Release Date: 2018-07-10</p>
<p>Fix Resolution (macaddress): 0.2.9</p>
<p>Direct dependency fix Resolution (react-scripts): 1.1.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2018-13797 (High) detected in macaddress-0.2.8.tgz - ## CVE-2018-13797 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>macaddress-0.2.8.tgz</b></p></summary>
<p>Get the MAC addresses (hardware addresses) of the hosts network interfaces.</p>
<p>Library home page: <a href="https://registry.npmjs.org/macaddress/-/macaddress-0.2.8.tgz">https://registry.npmjs.org/macaddress/-/macaddress-0.2.8.tgz</a></p>
<p>Path to dependency file: /client/package.json</p>
<p>Path to vulnerable library: /client/node_modules/macaddress/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-1.1.1.tgz (Root Library)
- css-loader-0.28.7.tgz
- cssnano-3.10.0.tgz
- postcss-filter-plugins-2.0.2.tgz
- uniqid-4.1.1.tgz
- :x: **macaddress-0.2.8.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vincenzodistasio97/excel-to-json/commit/e367d4db4134dc676344b2b9fb2443300bd3c9c7">e367d4db4134dc676344b2b9fb2443300bd3c9c7</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The macaddress module before 0.2.9 for Node.js is prone to an arbitrary command injection flaw, due to allowing unsanitized input to an exec (rather than execFile) call.
<p>Publish Date: 2018-07-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-13797>CVE-2018-13797</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-13797">https://nvd.nist.gov/vuln/detail/CVE-2018-13797</a></p>
<p>Release Date: 2018-07-10</p>
<p>Fix Resolution (macaddress): 0.2.9</p>
<p>Direct dependency fix Resolution (react-scripts): 1.1.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in macaddress tgz cve high severity vulnerability vulnerable library macaddress tgz get the mac addresses hardware addresses of the hosts network interfaces library home page a href path to dependency file client package json path to vulnerable library client node modules macaddress package json dependency hierarchy react scripts tgz root library css loader tgz cssnano tgz postcss filter plugins tgz uniqid tgz x macaddress tgz vulnerable library found in head commit a href found in base branch master vulnerability details the macaddress module before for node js is prone to an arbitrary command injection flaw due to allowing unsanitized input to an exec rather than execfile call publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution macaddress direct dependency fix resolution react scripts step up your open source security game with whitesource | 0 |
263,300 | 19,906,585,859 | IssuesEvent | 2022-01-25 13:25:41 | mainflux/mainflux | https://api.github.com/repos/mainflux/mainflux | closed | Add instructions for CoAP CLI | documentation good first issue | **ENHANCEMENT**
1. Describe the enhancement you are requesting.
Currently our documentation recommends using [Copper for CoAP testing](https://mainflux.readthedocs.io/en/latest/messaging/#coap). Add the information about [coap-cli](https://github.com/mainflux/coap-cli) and give the usage example.
2. Indicate the importance of this enhancement to you (must-have, should-have, nice-to-have).
Must have | 1.0 | Add instructions for CoAP CLI - **ENHANCEMENT**
1. Describe the enhancement you are requesting.
Currently our documentation recommends using [Copper for CoAP testing](https://mainflux.readthedocs.io/en/latest/messaging/#coap). Add the information about [coap-cli](https://github.com/mainflux/coap-cli) and give the usage example.
2. Indicate the importance of this enhancement to you (must-have, should-have, nice-to-have).
Must have | non_main | add instructions for coap cli enhancement describe the enhancement you are requesting currently our documentation recommends using add the information about and give the usage example indicate the importance of this enhancement to you must have should have nice to have must have | 0 |
3,692 | 15,084,811,965 | IssuesEvent | 2021-02-05 17:42:24 | carbon-design-system/carbon | https://api.github.com/repos/carbon-design-system/carbon | closed | Filterable Multiselect focus outline on wrong node | status: waiting for maintainer response 💬 type: a11y ♿ | When a filterable multiselect has some options selected, clicking on it's `<input>` puts focus outline on wrong node:
<img width="321" alt="Screen Shot 2021-01-12 at 22 05 18" src="https://user-images.githubusercontent.com/69599/104319050-a9727900-5523-11eb-9ff0-893e849bf8d7.png">
## Environment
> Operating system
MacOS
> Browser
Chrome
## Detailed description
> What version of the Carbon Design System are you using?
10.26
> What did you expect to happen?
Focus outline around whole control.
> What happened instead?
Focus outline around `<input>`
## Test case
See https://react.carbondesignsystem.com/?path=/story/multiselect--filterable or https://react.carbondesignsystem.com/?path=/story/multiselect--filterable
Note that clicking more to the left will put the focus outline around the whole control:
<img width="342" alt="Screen Shot 2021-01-12 at 22 19 22" src="https://user-images.githubusercontent.com/69599/104319508-50571500-5524-11eb-9ccb-67ae50e1831f.png">
Presumably a regression from #4721.
 | True | Filterable Multiselect focus outline on wrong node - When a filterable multiselect has some options selected, clicking on its `<input>` puts the focus outline on the wrong node:
<img width="321" alt="Screen Shot 2021-01-12 at 22 05 18" src="https://user-images.githubusercontent.com/69599/104319050-a9727900-5523-11eb-9ff0-893e849bf8d7.png">
## Environment
> Operating system
MacOS
> Browser
Chrome
## Detailed description
> What version of the Carbon Design System are you using?
10.26
> What did you expect to happen?
Focus outline around whole control.
> What happened instead?
Focus outline around `<input>`
## Test case
See https://react.carbondesignsystem.com/?path=/story/multiselect--filterable or https://react.carbondesignsystem.com/?path=/story/multiselect--filterable
Note that clicking more to the left will put the focus outline around the whole control:
<img width="342" alt="Screen Shot 2021-01-12 at 22 19 22" src="https://user-images.githubusercontent.com/69599/104319508-50571500-5524-11eb-9ccb-67ae50e1831f.png">
Presumably a regression from #4721.
| main | filterable multiselect focus outline on wrong node when a filterable multiselect has some options selected clicking on it s puts focus outline on wrong node img width alt screen shot at src environment operating system macos browser chrome detailed description what version of the carbon design system are you using what did you expect to happen focus outline around whole control what happened instead focus outline around test case see or note that clicking more to the left will put the focus outline around the whole control img width alt screen shot at src presumably a regression from | 1 |
1,128 | 4,998,373,040 | IssuesEvent | 2016-12-09 19:39:02 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | [ecs_taskdefinition] Invalid type for parameter containerDefinitions[0] | affects_2.1 aws bug_report cloud waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`ecs_taskdefinition`
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file = /srv/code/ops/ansible/ansible.cfg
configured module search path = ['./library']
```
##### CONFIGURATION
##### OS / ENVIRONMENT
OSX
##### SUMMARY
An exception occurs when trying to use variables for the `cpu` and `memory` parameters in the `containers` list.
##### STEPS TO REPRODUCE
Run playbook:
``` yml
- hosts: localhost
connection: local
gather_facts: false
vars:
- myservice:
cpu: 512
memory: 512
tasks:
- ecs_taskdefinition:
state: present
family: "foo-taskdef"
containers:
- name: myservice
essential: true
cpu: "{{ myservice.cpu | int }}"
image: "myimage:latest"
memory: "{{ myservice.memory | int }}"
```
##### EXPECTED RESULTS
Create a new ECS task-definition with 512 cpu and memory.
##### ACTUAL RESULTS
```
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/var/folders/t_/_hgp934j183fzyshx_4gfkhc0000gn/T/ansible_TQ8DWI/ansible_module_ecs_taskdefinition.py", line 221, in <module>
main()
File "/var/folders/t_/_hgp934j183fzyshx_4gfkhc0000gn/T/ansible_TQ8DWI/ansible_module_ecs_taskdefinition.py", line 196, in main
module.params['containers'], volumes)
File "/var/folders/t_/_hgp934j183fzyshx_4gfkhc0000gn/T/ansible_TQ8DWI/ansible_module_ecs_taskdefinition.py", line 134, in register_task
containerDefinitions=container_definitions, volumes=volumes)
File "/Users/rafi/.local/share/python/envs/ansible-2/lib/python2.7/site-packages/botocore/client.py", line 278, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Users/rafi/.local/share/python/envs/ansible-2/lib/python2.7/site-packages/botocore/client.py", line 548, in _make_api_call
api_params, operation_model, context=request_context)
File "/Users/rafi/.local/share/python/envs/ansible-2/lib/python2.7/site-packages/botocore/client.py", line 601, in _convert_to_request_dict
api_params, operation_model)
File "/Users/rafi/.local/share/python/envs/ansible-2/lib/python2.7/site-packages/botocore/validate.py", line 270, in serialize_to_request
raise ParamValidationError(report=report.generate_report())
botocore.exceptions.ParamValidationError: Parameter validation failed:
Invalid type for parameter containerDefinitions[0].cpu, value: 512, type: <type 'str'>, valid types: <type 'int'>, <type 'long'>
Invalid type for parameter containerDefinitions[0].memory, value: 512, type: <type 'str'>, valid types: <type 'int'>, <type 'long'>
```
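One possible fix would be for the module to cast stringified numeric container fields back to integers before calling the botocore API. A minimal sketch of that idea — the helper name `coerce_container_ints` is hypothetical and not part of the actual module:

```python
# Hypothetical helper: cast numeric container fields that arrive as strings.
# Jinja2 templating returns strings even when the `int` filter is applied,
# which botocore's parameter validation rejects for cpu/memory.
def coerce_container_ints(containers, keys=("cpu", "memory")):
    for container in containers:
        for key in keys:
            value = container.get(key)
            if isinstance(value, str) and value.isdigit():
                container[key] = int(value)
    return containers

containers = [{"name": "myservice", "cpu": "512", "memory": "512"}]
coerce_container_ints(containers)
print(type(containers[0]["cpu"]).__name__)  # int
```

Until something like this lands in the module, a workaround is to avoid templating these fields and pass literal integers in the task.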