| added (string, date: 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | created (timestamp[us]: 2001-10-09 16:19:16 to 2025-01-01 03:51:31) | id (string, length 4 to 10) | metadata (dict) | source (string, 2 classes) | text (string, length 0 to 1.61M) |
|---|---|---|---|---|---|
2025-04-01T06:38:34.292991
| 2023-06-15T12:48:33
|
1758757955
|
{
"authors": [
"NikolayMarusenko",
"Rolika4"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5677",
"repo": "epam/edp-install",
"url": "https://github.com/epam/edp-install/issues/27"
}
|
gharchive/issue
|
Enable SAST scan for Tekton pipelines
As an EDP user, I would like to be able to use SAST scanning out of the box for Tekton pipelines.
Acceptance Criteria:
SAST scan available out of the box for Tekton pipelines;
Enable only for build pipelines;
We have recently implemented a static application security testing feature for our EDP frameworks on build pipelines using DefectDojo.
This feature is available for application templates:
Python (Python 3.8, FastAPI, Flask)
Go (Beego, Gin)
JavaScript (React, Vue, Angular, Next.js, Express)
Java (Maven, Gradle)
C# (.Net 3.1, .Net6.0)
As well as library templates:
Python (Python 3.8, FastAPI, Flask)
JavaScript (React, Vue, Angular, Next.js, Express)
Java (Maven, Gradle)
C# (.Net 3.1, .Net6.0)
This implementation will allow for improved security testing measures throughout our development process and ultimately result in higher-quality applications and libraries.
|
2025-04-01T06:38:34.303260
| 2016-01-13T14:25:58
|
126428336
|
{
"authors": [
"dwhoman",
"greenrd"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5678",
"repo": "epiasini/XSDtoRNG",
"url": "https://github.com/epiasini/XSDtoRNG/issues/20"
}
|
gharchive/issue
|
Reference to undefined pattern comment
I tried converting http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd to rnc using this stylesheet and then trang. However, when I tried to use the .rnc file in nxml-mode in emacs, it said: nxml-display-file-parse-error: Reference to undefined pattern comment.
My XSD is a bit rusty but it looks to me like comment is defined in the .xsd file, but is not defined in the .rng or .rnc files.
I managed to work around this by specifying the start parameter.
I was able to get this schema to work first by applying greenrd's sed script to the xsd file, renaming it with 'mod', then using
xsltproc --stringparam start databaseChangeLog XSDtoRNG.xsl dbchangelog-3.1.mod.xsd > dbchangelog-3.1.rng. I then converted it to rnc with trang.
|
2025-04-01T06:38:34.306157
| 2019-11-22T07:19:21
|
527031031
|
{
"authors": [
"Heavenwalker",
"asvae",
"haizad"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5679",
"repo": "epicmaxco/vuestic-admin",
"url": "https://github.com/epicmaxco/vuestic-admin/issues/680"
}
|
gharchive/issue
|
Pagination Documentation
I think a pagination entry is needed in the documentation. It works really nicely, and it's a shame if someone overlooks it.
Using it right now with markup-tables
Thanks a lot for this wonderful template
Thanks for suggestion.
We're going to add docs as part of vuestic-ui. Here's some work that's not yet ready for feedback: http://vuestic-ui-develop-docs.sub.asva.by/components/VaPagination.html :).
Hi, any news on this? I am looking forward to using the pagination entry in this template.
Thank you :D
Here's the new link: https://vuestic.dev/en/ui-elements/pagination.
Things are very close to release. We'll update vuestic-admin with vuestic-ui in time.
|
2025-04-01T06:38:34.337441
| 2021-04-09T07:34:50
|
854256392
|
{
"authors": [
"jimmykarily",
"kkaempf"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5680",
"repo": "epinio/epinio",
"url": "https://github.com/epinio/epinio/issues/263"
}
|
gharchive/issue
|
Split tests ...
... for parallelization, more focused test areas, and speedup.
To be tackled post-MVP
We have tests that install/uninstall Epinio or some components. As far as I can tell, these are the feature tests (which enable/disable in-cluster services, etc.)
There are also other tests that enable the components if they are not there already but never disable them. That means we could simply enable those components in the BeforeSuite block (see the sketch at the end of this issue).
All other tests can safely run against the same Epinio instance (and the same Kubernetes cluster). That means we don't need GINKGO_NODES number of clusters but just one. Enabling in-cluster services is optional because it will probably never be used in production environments. The gke service is optional because it needs configuration (auth) that only users who want to use Google Cloud will have available, so it makes no sense to do it in epinio install.
We need to find a way to run the tests that mutate the cluster serially somehow, and on a separate cluster (ginkgo doesn't seem to support this yet: https://github.com/onsi/ginkgo/issues/526). We can run all the rest in parallel on the same cluster. This limits the number of clusters we need to just 2. The mutating tests are not expected to grow as much as the other tests, so it's sane to expect this to keep working for a while.
An option would be to separate the mutating tests to a new test suite.
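For illustration, the BeforeSuite idea above could look like this in Ginkgo (the helper name is hypothetical):
var _ = BeforeSuite(func() {
	// Enable the components once for the whole suite instead of
	// having individual tests enable them on demand.
	ensureInClusterServicesEnabled()
})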
|
2025-04-01T06:38:34.350393
| 2018-12-02T07:59:10
|
386537120
|
{
"authors": [
"DavidIzaac",
"epoberezkin"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5681",
"repo": "epoberezkin/ajv",
"url": "https://github.com/epoberezkin/ajv/issues/894"
}
|
gharchive/issue
|
Optional property
How to make an optional property ?
They are optional by default. Please see JSON schema spec
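For illustration, a minimal sketch using the Ajv API (the schema and data here are made up): any property not listed in "required" is optional.
const Ajv = require('ajv');
const ajv = new Ajv();
const schema = {
  type: 'object',
  properties: {
    name: { type: 'string' },
    nickname: { type: 'string' } // optional: not listed in "required"
  },
  required: ['name']
};
console.log(ajv.validate(schema, { name: 'Ada' })); // true, nickname omitted
console.log(ajv.validate(schema, { nickname: 'ada' })); // false, name missing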
|
2025-04-01T06:38:34.419972
| 2020-06-19T11:02:50
|
641889333
|
{
"authors": [
"ooystein"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5682",
"repo": "equinor/esv-intersection",
"url": "https://github.com/equinor/esv-intersection/pull/336"
}
|
gharchive/pull-request
|
fix overlaps check
The isBetween function did not return true when either the top or bottom of the cement was equal to the top or bottom of the hole. (The check was < or >; it needs to be <= and >=.)
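A minimal sketch of the inclusive check described (the real isBetween lives in esv-intersection, so this is only an illustration):
// Inclusive bounds, so a cement edge equal to a hole edge counts as overlapping.
const isBetween = (value: number, top: number, bottom: number): boolean =>
  value >= top && value <= bottom;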
Think I might have simplified it a little too much. Will check and update the PR before review is needed.
Fixed the overlap check, and now all cements we are testing with in wellx-designer render correctly.
Added tests
|
2025-04-01T06:38:34.426320
| 2024-09-12T06:09:09
|
2521437942
|
{
"authors": [
"hknutsen"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5683",
"repo": "equinor/terraform-azurerm-network",
"url": "https://github.com/equinor/terraform-azurerm-network/pull/78"
}
|
gharchive/pull-request
|
refactor!: simplify subnet configuration
BREAKING CHANGE: remove subnet object properties network_security_group, route_table and nat_gateway. Add subnet object properties security_group_id, route_table_id and nat_gateway_id.
Depends on hashicorp/terraform-provider-azurerm#27199
|
2025-04-01T06:38:34.475923
| 2018-02-28T20:14:11
|
301166680
|
{
"authors": [
"eddieschoute",
"erezsh"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5684",
"repo": "erezsh/lark",
"url": "https://github.com/erezsh/lark/issues/98"
}
|
gharchive/issue
|
LALR parser gives UnexpectedToken exception with optional string
I have written a grammar that successfully passes my test cases with the default parser. Now I am trying to convert it to an LALR parser, but this parser throws an exception when parsing a fairly simple sentence.
Expected Behavior
Lark should either give an error that certain constructions in the grammar are not allowed for LALR parsing or parse the grammar correctly.
Current Behavior
I have the following grammar rule:
qgate : "QGate[" STRING "]" ["*"] "(" INT ")"
which I apply to the following sentence
QGate["not"](0)
This worked successfully with the normal parser, but with the LALR parser I get an error
Error
Traceback (most recent call last):
File "/Users/eddie/dev/quippy/.venv/lib/python3.6/site-packages/lark/parsers/lalr_parser.py", line 46, in get_action
return states[state][key]
KeyError: '__ANONSTR_8'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/Cellar/python3/3.6.4_2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/unittest/case.py", line 59, in testPartExecutor
yield
File "/usr/local/Cellar/python3/3.6.4_2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/unittest/case.py", line 605, in run
testMethod()
File "/Users/eddie/dev/quippy/test_quipper_parser.py", line 44, in test_gatelist_qgate
parsed = parser.parse(basic_text)
File "/Users/eddie/dev/quippy/.venv/lib/python3.6/site-packages/lark/lark.py", line 197, in parse
return self.parser.parse(text)
File "/Users/eddie/dev/quippy/.venv/lib/python3.6/site-packages/lark/parser_frontends.py", line 37, in parse
return self.parser.parse(token_stream)
File "/Users/eddie/dev/quippy/.venv/lib/python3.6/site-packages/lark/parsers/lalr_parser.py", line 73, in parse
action, arg = get_action(token.type)
File "/Users/eddie/dev/quippy/.venv/lib/python3.6/site-packages/lark/parsers/lalr_parser.py", line 50, in get_action
raise UnexpectedToken(token, expected, seq, i)
lark.common.UnexpectedToken: Unexpected token Token(__ANONSTR_8, '](') at line 1, column 11.
Expected: dict_keys(['__RSQB'])
Context: <no context>
When I modify the text to (by adding a '*')
QGate["not"]*(0)
it parses successfully. Alternatively, I can change the grammar rule to
qgate : "QGate[" STRING "](" INT ")"
and that also works. It would seem to me that an optional symbol should be possible in an LR grammar (please correct me if I'm wrong), so where does this go awry?
Environment
OS: MacOS 10.13
Lark: 0.5.4
Okay, so here's the reason for the error. In LALR, the lexer is by design deterministic. So if it has two terminals somewhere in the grammar, ] and ](, and it sees ]( in the input, it has to choose between them, and cannot try both. By default it chooses the longer one, though you can change that with priority.
What I'm saying is, somewhere else in the grammar there's a ]( terminal, and I suggest you break it into ] and (.
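To make that concrete, a hypothetical sketch of the split (the actual offending rule is elsewhere in the grammar):
// before: the lexer owns a single two-character terminal "](",
// which shadows the lone "]" that qgate needs when the optional "*" is present
other_rule : "QGate[" STRING "](" INT ")"
// after: "]" and "(" are separate terminals the whole grammar can share
other_rule : "QGate[" STRING "]" "(" INT ")"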
Yes, thank you. It is clear I needed to learn how a lexer behaves.
|
2025-04-01T06:38:34.497913
| 2024-03-13T01:46:03
|
2182957835
|
{
"authors": [
"KzZheng",
"oldwangggggg"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5685",
"repo": "eric-ai-lab/MiniGPT-5",
"url": "https://github.com/eric-ai-lab/MiniGPT-5/issues/45"
}
|
gharchive/issue
|
RuntimeError: expected scalar type BFloat16 but found Float
When I run train_eval.py and do the stage 1 evaluation, I hit this error and have no solution.
Can you provide more error details? This part seems to be an auto-casting issue, which should be automatically handled by Lightning.
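For context, "auto-casting" here refers to running the mixed-precision region under an autocast context, along these lines (a generic PyTorch sketch, not the MiniGPT-5 code; model and batch are placeholders):
import torch
# Ops inside this block run in bfloat16 where safe, avoiding dtype mismatches.
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    output = model(batch)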
|
2025-04-01T06:38:34.536712
| 2019-01-16T21:58:06
|
400016478
|
{
"authors": [
"conorg763",
"nithinkashyapn",
"renatoluz"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5686",
"repo": "ericfourrier/scrape-linkedin",
"url": "https://github.com/ericfourrier/scrape-linkedin/issues/17"
}
|
gharchive/issue
|
Error
Hi there, is there any thing to do in this case?
pylinkedin.exceptions.ServerIpBlacklisted: Linkedin blacklists ips for unauthentified http requests, Aws, Digital Ocean
Hey,
LinkedIn blacklists all IPs from major cloud providers, so the only way I see you can use it is with a different cloud provider. There are many - Vultr, Scaleway, OVH and more. I haven't tried running it on any VPS. You can find great offers on lowendbox.
Do tell which one works.
Getting the same issue as @renatoluz
|
2025-04-01T06:38:34.554698
| 2018-07-10T05:45:21
|
339700724
|
{
"authors": [
"deflock",
"ericmorand",
"nedkelly"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5687",
"repo": "ericmorand/twing",
"url": "https://github.com/ericmorand/twing/issues/230"
}
|
gharchive/issue
|
Twing throws: "SyntaxError: missing ) after argument list" whenever it encounters a ` character.
Hi, it took me a while to track down this issue as I'm currently porting a large project to Twing from Twig.js to be more compatible with our TwigPHP environment.
After a lot of digging I tracked down a "SyntaxError: missing ) after argument list" error to any template that contains a ` character, no matter if the ` is in a string or comment.
Example:
{% block content %}
<p>Some text containing `back tick characters` that we use to parse with a custom markdown tag</p>
{% endblock %}
or:
{# Some sample code `<div class="example">Your code here</div>` #}
We have a lot of templates with these characters as we parse the contents of blocks with markdown to form an internal style-guide and coding guide. To be honest I'm not sure if this is a bug or it can be avoided using some escape configuration but simply doing \` fixes the parser but won't work when producing the template.
Any ideas?
Cheers.
I've reproduced this. It happens because backticks are not escaped in the getSourceContext() method.
@nedkelly, @deflock, I can reproduce it when the environment debug option is set to true. I'll fix it in no time, but for now you should be able to avoid this issue by setting debug to false.
@deflock, you are totally right, this comes from the getSourceContext content of the pre-compiled template. I don't remember if there is a reason for using compiler.raw instead of compiler.string. It was a bad idea, anyway.
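For illustration, the kind of escaping needed when emitting source text into a JavaScript template literal (a sketch, not Twing's actual code):
// Escape backslashes first, then backticks and ${, so the generated literal stays valid.
const escapeForTemplateLiteral = (src) =>
  src.replace(/\\/g, '\\\\').replace(/`/g, '\\`').replace(/\$\{/g, '\\${');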
Fixed in <EMAIL_ADDRESS>
|
2025-04-01T06:38:34.572186
| 2023-06-17T14:16:24
|
1761872195
|
{
"authors": [
"clementepestelli",
"lx-0",
"lys623",
"pyronaur",
"zcpua"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5688",
"repo": "erictik/midjourney-client",
"url": "https://github.com/erictik/midjourney-client/issues/136"
}
|
gharchive/issue
|
client.Describe No result returned
the code that reproduces this issue or a replay of the bug
const imgUrl=`https://cdn.discordapp.com/attachments/1008571049039896576/1119473369813884958/manuelcorazzari_dirty_hands_holding_dirt_from_the_ground_sun_li_98bfb5e5-8c4f-4cb3-879a-bc229108e505.png`
const msg = await client.Describe(imgUrl);
console.log({ msg });
Describe the bug
No result returned
error log
No result returned
ws:true
https://github.com/erictik/midjourney-client/blob/main/example/describe.ts
I have the same problem with describe
The Code
const msg = await client.Describe( "https://img.ohdat.io/midjourney-image/1b74cab8-70c9-474e-bfbb-093e9a3cfd5c/0_1.png" );
console.log({msg});
The output:
https://github.com/erictik/midjourney-client/blob/main/example/describe.ts#L18
Yeah, I saw that client.Connect() but it gave me this error:
TypeError: client.Connect is not a function
at main (/*******/index.js:12:18)
at Object.<anonymous> (/*******/index.js:24:1)
at Module._compile (node:internal/modules/cjs/loader:1254:14)
at Module._extensions..js (node:internal/modules/cjs/loader:1308:10)
at Module.load (node:internal/modules/cjs/loader:1117:32)
at Module._load (node:internal/modules/cjs/loader:958:12)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
at node:internal/main/run_main_module:23:47
Maybe I'm forgetting something important?
npm install midjourney
Ok, by default npm installed v2.7.79.
I forced the package.json to 3.0.80 and it works!
Thank you!
Still no. What's wrong, "midjourney": "^3.0.81"
help @clementepestelli
The task was successfully created, but the returned result was not broadcast
This is cursed: the repo code doesn't work either. The other APIs (the settings API, the reset API) all work; it's just Describe that fails. Help!
I have the same issue with the example. No msg is returned from Describe().
Doesn't work for me either, using the latest version. Node 20, <EMAIL_ADDRESS>, no matter if I enable websockets or not - it submits the describe request and I can find the response in the discord channel, but the MJ API never catches the response. Maybe because it's a public channel?
|
2025-04-01T06:38:34.636692
| 2017-04-06T18:12:01
|
219985062
|
{
"authors": [
"GASPARDYP",
"ericyd",
"karynrose1784"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5689",
"repo": "ericyd/gdrive-copy",
"url": "https://github.com/ericyd/gdrive-copy/issues/15"
}
|
gharchive/issue
|
Folder Copying Won't Start - Returns Blank Screen
I just started using the app today and successfully used it twice this morning, including one folder that was very large (over 2000 total files). However, when I try to start a new copy now, once I click "copy folder," it goes to a blank screen and doesn't start the copy (I checked my Drive to see whether the new folder had been created and whether it was working in the background). I also tried using "Resume" to see if a previous copy was actually still in progress, but when I click "Resume copying," the same thing happens with the blank screen.
I am having the same issue, and my folder is not too large. Actually, I thought that could be the issue and tried with a smaller folder, and the same thing happened.
I can reproduce, but this isn't an issue with the app. Google must have changed something with their Google Apps Script service, which I imagine will be fixed soon. It appears that they changed the headers that allow the google.script service to be accessed.
This is causing the google.script service to not be found.
I don't have any control over this, but I'll leave the issue open until Google resolves it.
Thanks for your quick response, Eric. Seems whatever it was has been resolved as I've been able to use the app just fine this morning.
And thank you for creating & maintaining this - it's been a real lifesaver in managing my drive.
I'm still experiencing the issue.
|
2025-04-01T06:38:34.646171
| 2024-11-08T11:33:24
|
2643774955
|
{
"authors": [
"michelemodolo"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5690",
"repo": "erigontech/erigon-snapshot",
"url": "https://github.com/erigontech/erigon-snapshot/pull/329"
}
|
gharchive/pull-request
|
[automation] Updated mainnet.toml for erigon3 up to 21.141M
This is an AUTOMATIC PR raised from this machine (which is running erigon3): snapshotter-bm-e3-ethmainnet-n1
I'm closing this PR as we were waiting for the new Caplin state files, which were not produced because of a downloader glitch in creating their torrents' hashes
|
2025-04-01T06:38:36.018985
| 2017-02-25T02:10:56
|
210202490
|
{
"authors": [
"codecov-io",
"fernandolobato"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5691",
"repo": "erikiado/jp2_online",
"url": "https://github.com/erikiado/jp2_online/pull/20"
}
|
gharchive/pull-request
|
Fixed issue with missing migration
There were changes in the model and no makemigrations command was run.
What's this PR do?
Fix issue #19
Where should the reviewer start?
How should this be manually tested?
Any background context you want to provide?
This template was adapted ~stolen~ from Quickleft/Sprint.ly
Codecov Report
Merging #20 into develop will increase coverage by 0.12%.
The diff coverage is 100%.
@@ Coverage Diff @@
## develop #20 +/- ##
==========================================
+ Coverage 90.78% 90.9% +0.12%
==========================================
Files 46 47 +1
Lines 423 429 +6
==========================================
+ Hits 384 390 +6
Misses 39 39
| Impacted Files | Coverage Δ |
|---|---|
| familias/migrations/0005_auto_20170225_0116.py | 100% <100%> (ø) |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update e9c30b5...66cab5c. Read the comment docs.
|
2025-04-01T06:38:36.038204
| 2020-11-25T01:45:54
|
750258668
|
{
"authors": [
"eriknyquist",
"eriknyquist-avive",
"jhale1805"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5692",
"repo": "eriknyquist/tones",
"url": "https://github.com/eriknyquist/tones/issues/3"
}
|
gharchive/issue
|
Different input frequencies are getting played as the same output frequency
In #2 I referenced a bug I was investigating. Here it is.
Background
I'm trying to use this library to prototype a personal data over sound project. I'm working on using the presence of a sound at any of 24 different frequencies to convey the data my application needs. Eventually I will be using several of these frequencies in parallel, which maps really well to this package's support of different tracks.
Issue
The issue I'm finding is that when I use this package to play a test sound at each of my chosen 24 frequencies I notice that some of the higher frequencies get played as the same tone.
Low Frequency Example
Here's the code:
from tones.mixer import Mixer
from tones import SINE_WAVE
from playsound import playsound
mixer = Mixer(44100, 0.5)
mixer.create_track(1, SINE_WAVE, attack=0.01, decay=0.1)
mixer.add_tone(1,<PHONE_NUMBER>76724, .5)
mixer.add_tone(1,<PHONE_NUMBER>63368, .5)
mixer.add_tone(1, 1000.40016006403, .5)
mixer.add_tone(1, 1100.5943209333, .5)
mixer.add_tone(1, 1301.06687483737, .5)
mixer.add_tone(1, 1400.56022408964, .5)
mixer.add_tone(1, 1500.60024009604, .5)
mixer.add_tone(1, 1601.53747597694, .5)
mixer.add_tone(1, 1799.20834832674, .5)
mixer.add_tone(1, 1899.69604863222, .5)
mixer.add_tone(1, 2000.80032012805, .5)
mixer.add_tone(1, 2100.84033613445, .5)
mixer.add_tone(1, 2296.73863114378, .5)
mixer.add_tone(1, 2396.93192713327, .5)
mixer.add_tone(1, 2497.5024975025, .5)
mixer.add_tone(1, 2597.4025974026, .5)
mixer.add_tone(1, 2801.12044817927, .5)
mixer.add_tone(1, 2903.60046457607, .5)
mixer.add_tone(1, 3001.20048019208, .5)
mixer.add_tone(1, 3105.5900621118, .5)
mixer.add_tone(1, 3306.87830687831, .5)
mixer.add_tone(1, 3401.36054421769, .5)
mixer.add_tone(1, 3501.40056022409, .5)
mixer.add_tone(1, 3607.50360750361, .5)
mixer.write_wav('tones.wav')
playsound('tones.wav')
And here's a spectrogram of the frequencies I'm getting as a result:
High Frequency Example
It appears as though playing higher frequencies exacerbates the problem. This code plays the same frequency intervals (not note intervals) but starting 3000 Hz higher than the previous example.
from tones.mixer import Mixer
from tones import SINE_WAVE
from playsound import playsound
mixer = Mixer(44100, 0.5)
mixer.create_track(1, SINE_WAVE, attack=0.01, decay=0.1)
mixer.add_tone(1, 3990.42298483639, .5)
mixer.add_tone(1, 4105.09031198686, .5)
mixer.add_tone(1, 4201.68067226891, .5)
mixer.add_tone(1, 4302.92598967298, .5)
mixer.add_tone(1, 4492.36298292902, .5)
mixer.add_tone(1, 4608.29493087558, .5)
mixer.add_tone(1, 4699.24812030075, .5)
mixer.add_tone(1, 4793.86385426654, .5)
mixer.add_tone(1, 4995.004995005, .5)
mixer.add_tone(1, 5102.04081632653, .5)
mixer.add_tone(1, 5213.76433785193, .5)
mixer.add_tone(1, 5291.00529100529, .5)
mixer.add_tone(1, 5494.50549450549, .5)
mixer.add_tone(1, 5580.35714285714, .5)
mixer.add_tone(1, 5714.28571428571, .5)
mixer.add_tone(1, 5807.20092915215, .5)
mixer.add_tone(1, 6002.40096038415, .5)
mixer.add_tone(1, 6105.0061050061, .5)
mixer.add_tone(1, 6211.1801242236, .5)
mixer.add_tone(1, 6321.11251580278, .5)
mixer.add_tone(1, 6493.50649350649, .5)
mixer.add_tone(1, 6613.75661375661, .5)
mixer.add_tone(1, 6675.56742323097, .5)
mixer.add_tone(1, 6802.72108843537, .5)
mixer.write_wav('tones.wav')
playsound('tones.wav')
Here's the resulting spectrogram.
Other Thoughts
I'm wondering if this has something to do with how this package focuses on playing specific notes (i.e. music composition). Perhaps my frequencies are simply getting rounded to the closest note? I didn't see anything that would seem to be doing that in the mixer.add_tone() function, but it's an idea that's been nagging at me while I look through things.
I'm happy to help develop a solution to improve this great package, I'm just getting stuck when I tackle it on my own. Hoping @eriknyquist has some added insight into why this might be occurring.
@jhale1805 I think you hit the nail on the head when you said "I'm wondering if this has something to do with how this package focuses on playing specific notes (i.e. music composition)?"
I never did any sort of detailed testing of specific frequencies, like you are doing now. This module was very much intended for producing musical tones, and the extent of my testing was pretty much just using my ears to make sure things sound musically OK.
That being said, I will take a look at the code which handles specific frequency values, and see if there is an obvious problem that could cause such a loss of precision.
Thanks!
@jhale1805 I can possibly help speed up your investigation; the problem is most likely with the _sine_wave_table function here https://github.com/eriknyquist/tones/blob/master/tones/tone.py#L6
This function is called by the Tone.samples() function (right here https://github.com/eriknyquist/tones/blob/master/tones/tone.py#L197), to obtain a set of samples that make up a single 360 degree sine wave oscillation in the desired frequency/sample rate/amplitude.
The Tone.samples() function then iterates over this table multiple times, as many times as is needed to create the number of samples we need for the requested note time.
My guess is that there is some loss of precision that occurs when I do period = int(rate / freq) in the _sine_wave_table function, and this is what's causing the anomaly you're seeing where the output seems to "snap" to certain frequencies.
I'm not sure exactly how I would resolve that, right now, but this is the area I'm drawn to right now based on your description.
OK, so after I explained that to you I'm thinking that the problem is indeed _sine_wave_table, or more specifically, the approach of generating a single period's worth of sine wave samples and then duplicating it multiple times to get the desired note length.
This approach assumes that the full period of any sine wave at any frequency can be described by a discrete number of samples, when in reality the full period of a sine wave is likely going to have some "fractional" sample at the end (e.g. a full period of 1555Hz, at 44100 sample rate, works out to 28.36 samples), unless the sample rate happens to be an exact multiple of the sine wave frequency.
This might also explain the weird harmonics reported in #1, since the issue I described above would result in sine waves that are not perfect-- there would be little "blips" in the waveform between every period, which I'm guessing would result in some odd harmonic content.
I think the correct way to do this would be to generate all samples for the full note length at once, instead of just doing a single period and then duplicating it, just like is done in this Stack Overflow answer:
https://stackoverflow.com/questions/8299303/generating-sine-wave-sound-in-python
I don't have time to work on it and test it right now (I can get to that next weekend), but I thought I would just dump that info here in case it helps you out.
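A minimal sketch of that approach, computing every sample from a running time index instead of tiling one period (the function name is illustrative, not the tones API):
import math
def sine_samples(freq, rate, duration, amplitude=0.5):
    # Each sample is computed at t = n / rate, so the phase stays exact and
    # no fractional-period "blip" appears between repetitions.
    return [amplitude * math.sin(2.0 * math.pi * freq * n / rate)
            for n in range(int(rate * duration))]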
I think you're on to something. I plotted the output waveform using this Stack Overflow post as a guide (https://stackoverflow.com/a/18625294) and got the following output (zoomed in a lot).
Just as you predicted, there is an odd change of slope at the end of each period that I don't think can be attributed to the minor imperfections introduced by using digital samples instead of an analog signal.
I'll try out the solution you found and give another update in a bit.
Just for reference, the exact code that produced that image
from tones.mixer import Mixer
from tones import SINE_WAVE
mixer = Mixer(44100, 0.5)
mixer.create_track(1, SINE_WAVE, attack=0.01, decay=0.1)
mixer.add_tone(1, 2597.4025974026, .25)
mixer.add_tone(1, 2695.41778975741, .25)
mixer.add_tone(1, 2801.12044817927, .25)
mixer.add_tone(1, 3001.20048019208, .25)
mixer.add_tone(1, 3105.5900621118, .25)
mixer.add_tone(1, 3203.07495195388, .25)
mixer.add_tone(1, 3306.87830687831, .25)
mixer.add_tone(1, 3501.40056022409, .25)
mixer.add_tone(1, 3607.50360750361, .25)
mixer.add_tone(1, 3700.96225018505, .25)
mixer.add_tone(1, 3799.39209726444, .25)
mixer.write_wav('tones.wav')
#Addition
import wave
import numpy as np
import matplotlib.pyplot as plt
spf = wave.open('tones.wav')
signal = spf.readframes(-1)
signal = np.frombuffer(signal, np.int16)  # np.fromstring is deprecated
plt.plot(signal)
plt.show()
#/Addition
After some more tinkering it looks like your suggested solution works great!
The "Low Frequency" code from my original post now registers on the spectrogram as expected:
In addition to each distinct input frequency now getting its own distinct output, you'll notice that my original screenshot had a thin green line showing that the output frequency of the twelfth tone was 2203 Hz - a full 103 Hz off of the original ~2100 Hz. This new version now plays that same twelfth tone at 2103 Hz - only 3 Hz off of the original.
The harmonics are still present, but not as harshly as before.
The "High Frequency" example also works much better now:
And the output wave form also pretty much looks like a perfect sine wave.
You'll see that I submitted a merge request with my solution. As indicated there, I only tested it for my specific use case, so you'll want to make appropriate updates to the other waveforms you support that I'm not as familiar with before re-publishing this package to pip.
Thanks again for this great package and your help with this issue!
|
2025-04-01T06:38:36.067751
| 2015-10-19T13:37:43
|
112146031
|
{
"authors": [
"Jevgenius",
"anmol1591",
"anselmdk",
"astrauka",
"chaitanya0bhagvan",
"erikras",
"gabrielhpugliese",
"himawan-r",
"joaoreynolds",
"kaueburiti",
"mohebifar",
"saitonakamura",
"szokrika",
"xcatliu",
"zackshen"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5693",
"repo": "erikras/redux-form",
"url": "https://github.com/erikras/redux-form/issues/152"
}
|
gharchive/issue
|
Best practice to manipulate a field's value
Assume that I need to change the value of the username input field upon a change of the last name field. The reason the username should be kept in the input is to let the user manipulate it. The way to do it is to call the change action to change the value of the username input. But we need to override the onChange prop on the lastName input. It seems sloppy, and I wonder if it has drawbacks.
<input {...lastName} />
<input {...username} />
The best practice to manipulate field2's value based on field1's value is to use a normalizer.
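A sketch of what that could look like for the username/lastName case, assuming the v4/v5-era reducer.normalize API from the readme (form and field names are made up):
import { reducer as formReducer } from 'redux-form';
// Recompute username whenever lastName changes; the user can still edit username directly.
const reducer = formReducer.normalize({
  signup: {
    username: (value, previousValue, allValues, previousAllValues) =>
      allValues.lastName !== previousAllValues.lastName
        ? (allValues.lastName || '').toLowerCase()
        : value
  }
});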
Thanks.
One other thing: let's say there was no field1. For example, if we needed to change lat and lng pair input values by dragging around a marker on a Leaflet or Google map.
Same thing. If you want to restrict a value based on anything, you can "normalize" it. The code in the readme shows how to keep a string value in all upper case, for example. Just as easy to keep a lat-long coordinate within a certain radius.
No I didn't mean to keep it in a certain radius. See the image. I mean changing those values by dragging that blue marker while the users can change it themselves.
Normalizing is done in the reducer. What I'm talking about is in the component layer.
Sure, that can be done with onChange calls. Ideally, you'd have a map component that would either call onChange as the user drags the marker, and/or onBlur when the marker was dropped.
I got it. :+1: Thanks.
@erikras, do you mean to call this.props.fields.xxx.onChange() directly?
@xcatliu Yes, you may do that. If onChange receives an event (if it was passed to an input), it will try to get the value from the event.target, but you can also just call onChange(newValue).
@erikras Thx~
@erikras How can I know when onChange has finished? Is there any API like this:
onChange(newValue, callback);
No, onChange is synchronous.
@erikras If I call this.props.handleSubmit() after onChange, the values don't seem to update. Code fragment:
handleClick() {
// do something
this.props.fields.xxx.onChange(newValue);
console.log(this.props.values); // not update
this.props.handleSubmit(); // submit old values
}
That is true. The props get repopulated on a subsequent process tick.
I suspect that there is a flaw in your design if you are wanting to onChange and handleSubmit in the same code block. But you could conceivably do something like:
this.props.onSubmit({ // <---- what handleSubmit would call
...this.props.values,
xxx: newValue
});
@erikras Yes I get it, thanks for helping me!
Is there any way to change multiple values at once without using initialize()? What I'm doing right now is:
const values = getValues(form.signupPF);
initialize('signupPF', Object.assign({}, values, {
address,
neighborhood,
city,
state
}), Object.keys(fields));
Is this correct?
@gabrielhpugliese No, there is no form of the CHANGE action that works on multiple fields. If you didn't want to use initialize, which will affect your dirty/pristine state, you will have to dispatch the CHANGE actions individually for each field.
No problems with initialize, just to know if it would be OK or would cause some weird side-effect. Thanks.
@erikras when I'm trying to call this.props.onSubmit I'm getting Uncaught TypeError: this.props.onSubmit is not a function. I also don't find it in http://erikras.github.io/redux-form/#/api/props - maybe the api has changed?
...I'm using 4.1.4.
Tried this.props.handleSubmit?
Yeah, that works, but I want to pass additional data, like this:
this.props.onSubmit({ // <---- what handleSubmit would call
...this.props.values,
xxx: newValue
});
Also, this.props.handleSubmit(); seems to work, while doing this and trying to manipulate the data seems to just fail silently:
this.props.handleSubmit(data => {
});
@anselmdk Because onSubmit is a specific proprietary prop for redux-form, it is removed from the props. You will need to call your prop something else (e.g. submitForm) for it to show up in the props of your decorated component.
@erikras thanks. In the end I ended up using the solution with the hidden field, as I also needed to do some different validation based on the field value - seems to work okay!
@erikras
The props get repopulated on a subsequent process tick
I am using async validation to fetch the postal code from the server once a member changes it.
A member also has the ability to change the country, thus I need to rerun the validation on country change.
Currently I delay the postal code field touch to allow the country field to update its value.
Is there an option to determine when the field got updated, such that I don't wait too long or too short?
the ability to run onChange on multiple values in one action would be great. Or rather, in my situation, I want to replace an entire subdocument array with a new one.
In other words, I have a "field": 'collectionFields[].orderNum'. collectionFields is an array of input fields my users can configure and reorder. So I'm changing the orderNum values for collectionFields. I was hoping to just take my array of collectionFields, change their ordering, and replace it with an identical array of redux-form objects, but with new values for orderNum. (My reordering component returns a new array of objects with orderNum values.) Something like:
this.props.fields.collectionFields.replace(orderedFields), assuming orderedFields is an array of redux-form instances (or whatever you call them).
My alternative working solution for now is:
this.props.fields.collectionFields.map((myFormField, index) => {
myFormField.orderNum.onChange(orderedFields[index].orderNum.value)
})
It's annoying though, because the redux-form/CHANGE action is dispatched a whole bunch of times for what seems like it could be done in a single action.
Just an enhancement recommendation.
Hi, I'm trying to do this:
addressUpdated(newAddress) {
//TODO, tell Redux form that a value is now available!
this.props.fields.address.onChange(newAddress.label);
}
address is a hidden field that should get a value once addressUpdated is called.
I get an error
Uncaught TypeError: Cannot read property 'onChange' of undefined
Component is generated:
<Field id="address" name="address" type="hidden" component={fieldFactory} />
const fieldFactory = ({id, input, label, type, meta: { touched, error } }) => {
if(type.match(/hidden/)){
return(
<div>
<input id={id} {...input} type={type} />
{touched && error && <span>{error}</span>}
</div>
);
}
}
Any ideas?
@szokrika I had the same issue when migrating from v5 to v6. I solved it by giving a ref and adding withRef={true} to the Field I would like to modify.
<Field type='text' ref='name' withRef={true} label='Name' name='Name' component={renderInput} value={this.state.location.name}/>
When I want to change the field value I do this
this.refs.name.getRenderedComponent().props.input.onChange(newName);
Please note this Cannot be used if your component is a stateless function component
Hey!
I'm kinda new to React and couldn't figure out how and where I should rewrite my onChange to get things working. P.S. I'm using react-select as my selectInput component.
Here is part of my form. I would appreciate any concrete examples with the custom onChange.
const BasicForm = props => {
const { error, handleSubmit, pristine, reset, submitting, countries, phonePrefixes } = props;
return (
<div className="form step1">
<form onSubmit={handleSubmit}>
<Field
name="country"
className="form-control"
component={selectInput}
options={countries}
placeholder="Country"
/>
<Field
name="phonePrefix"
className="form-control"
component={selectInput}
options={phonePrefixes}
placeholder="Prefix"
/>
<button type="submit" disabled={submitting}>REGISTER
<i className="fa fa-chevron-right"> </i>
</button>
</form>
</div>
)};
@Jevgenius in your case you should write the custom onChange in the selectInput file, and pass this custom onChange to the react-select.
@erikras I'm using redux-form v6. Where is the props.fields property? I can't find it in the API document. Has it been removed?
@zackshen It was removed, see the v5->v6 migration guide
Stories
storiesOf("URLField", module)
.add("default", () => (
))
code
const URLField = ({ validUrl , urlDescription, primaryLabel, warningText, secondaryLabel }) => (
{primaryLabel}
{secondaryLabel}
{urlDescription}
{warningText}
);
CSS
input {
text-align: right;
display: inline-block;
padding: 10px 2px;
background: ${props => props.validUrl === "true" ? `url(${valid}) no-repeat left 10px center` : `url(${invalid}) no-repeat left 10px center`};
}
Any idea how to change the background?
@xcatliu If I may ask, where did you get the reference this.props.fields.xxx.onChange()? I wanted to try this, but I can't seem to get the fields reference from the onChange of my first field.
|
2025-04-01T06:38:36.071622
| 2016-01-20T19:27:45
|
127764767
|
{
"authors": [
"erikras",
"nschurmann"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5694",
"repo": "erikras/redux-form",
"url": "https://github.com/erikras/redux-form/issues/574"
}
|
gharchive/issue
|
resetForm() leaves a ↵ when executed.
I have this form:
/**
* React component MessagesForm
*/
import React, { PropTypes } from 'react'
import {reduxForm, reset} from 'redux-form'
const MessagesForm = p => {
const {fields: {message}} = p
const addMessage = e => {
if(e.keyCode == 13 && e.shiftKey == false) {
p.handleSubmit()
p.resetForm()
}
}
return (
<form onSubmit={p.handleSubmit}>
<div className="form-group">
<textarea
{...message}
value={message.value || ""}
rows="1"
type="text"
className="form-control"
onKeyDown={addMessage}
placeholder="Escribe tu mensaje y presiona Enter para enviar." />
</div>
</form>
)
}
const form = reduxForm({
form: 'message',
fields: ['message']
})(MessagesForm)
export default form
As you can see, the form is submitted when the user presses the enter key (↵) and is not pressing shift. After handling the submit I reset the form immediately, but it leaves a ↵ sign instead of an empty string (which would let the placeholder show again).
I need it to be reset to an empty string instead of a ↵. How can I do this?
As a workaround, adding setTimeout(p.resetForm, 1) makes it work. Still looking for a solution.
Shouldn't you have a e.preventDefault() to prevent the ↵ from making it into the input?
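Applied to the handler above, that suggestion would look like:
const addMessage = e => {
  if(e.keyCode == 13 && e.shiftKey == false) {
    e.preventDefault() // keep the ↵ from ever reaching the textarea
    p.handleSubmit()
    p.resetForm()
  }
}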
@erikras thanks!!! :D
Reopen if this is not solved.
|
2025-04-01T06:38:36.072687
| 2021-04-13T09:33:23
|
856777498
|
{
"authors": [
"erikrikarddaniel"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5695",
"repo": "erikrikarddaniel/magmap",
"url": "https://github.com/erikrikarddaniel/magmap/issues/7"
}
|
gharchive/issue
|
Remove stable RNA
Use the tool in the same package as BBMap.
Fixed in 94e634465505874002b4431c4988b9b5df34eccd with --sequence_filter parameter.
|
2025-04-01T06:38:36.096852
| 2020-12-22T03:27:29
|
772608103
|
{
"authors": [
"curiousdannii",
"erkyrath"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5696",
"repo": "erkyrath/glkote",
"url": "https://github.com/erkyrath/glkote/issues/46"
}
|
gharchive/issue
|
Support multiple instances of each module
Currently, every module in the Glk JS ecosystem is a global object (window.GlkOte, window.Dialog, window.Quixe, etc). We would like to support the possibility of more than one. It should be possible to create instances of each and start them running, with each one talking only to its peers.
This was one of the issues mentioned when talking about ES modules (https://github.com/erkyrath/glkote/pull/39). However, I want to address this separately and first.
The plan is as follows:
Every module will define a JS class (e.g. GlkOteClass). (Recall that JS classes are just functions that you instantiate by writing new GlkOteClass().)
For backwards compatibility, each module will define an instance of its class (GlkOte). If you load the module.js in the old-fashioned way, you will wind up with window.GlkOteClass and window.GlkOte. A page can go ahead using window.GlkOte just as before. However, you can create more instances as needed.
An instance must be inited by calling its init() method. You may pass in associated module instances if you want:
GlkOte.init({ Dialog: new DialogClass() });
If you don't, the instance will create its own module instances where needed.
Each class has two new methods:
inited(): Returns whether the instance has been successfully inited.
getlibrary(val): Returns the associated module instance by name.
For example, GlkOte.getlibrary('Dialog') will return the Dialog instance being used by that GlkOte instance. Glk.getlibrary('GlkOte') will return the GlkOte being used by that Glk API instance. And so on.
When implementing higher-level modules, it's generally cleaner to fetch low-level modules using getlibrary() rather than trying to cache a reference at init() time. (Init order is a pain in the butt.)
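Putting the plan together, a minimal usage sketch with the class names described above:
// Construct instances first, then init them with references to each other.
const dialog = new DialogClass();
const glkote = new GlkOteClass();
glkote.init({ Dialog: dialog });
// Later, fetch associated modules instead of caching them at init() time:
glkote.getlibrary('Dialog'); // returns the dialog instance above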
I have gone through and done this for all the modules in the glkote repo. Quixe is not yet done.
Why an init method rather than passing in the references to the constructor?
One, that's the way it works now, and I don't want to mess around with it too much. (window.GlkOte is provided as a constructed instance which has not yet been initialized.) Two, instances need to be initialized with references to each other. (E.g. Dialog needs a reference to GlkOte and vice versa.) So you need to construct them both, then initialize them.
(I see I forgot Dialog.getlibrary(), oops.)
Ahh, I hadn't thought there were any circular references, but that's because I had changed Dialog to use console.log instead of GlkOte.log.
The plan that got implemented in 2020 seems to be doing the job.
|
2025-04-01T06:38:36.126011
| 2018-06-06T20:17:16
|
330016838
|
{
"authors": [
"BenDol",
"kie-ci",
"kiereleaseuser2",
"tiagobento"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5697",
"repo": "errai/errai",
"url": "https://github.com/errai/errai/pull/347"
}
|
gharchive/pull-request
|
ERRAI-1111: Fix a bug with interrupting a navigation hide.
This will properly allow us to interrupt the hiding process in navigation.
Can one of the admins verify this PR? Comment with 'ok to test' to start the build.
Are we able to progress this issue?
@BenDol Can you provide a more detailed description of the issue?
So, when I rewrote the navigation to support more complex use cases, one of them was the ability to interrupt the hiding navigation control inside onHiding(NavigationControl). When you call interrupt on this NavigationControl object, you expect it to retain the page you are currently on, but since the navigation process has often already started taking place, the URL state will in most cases already be updated. So what this ensures is that the previous state of the page is restored after the hiding is interrupted.
We use this when we want to protect sensitive data input for example, so if a user attempts to click off the page we have the ability to properly cancel that navigation in the onHiding.
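A sketch of that use case in the hiding hook (the guard method and annotation usage are assumptions; interrupt/proceed are as described above):
@PageHiding
public void onHiding(final NavigationControl control) {
  if (hasUnsavedSensitiveInput()) { // hypothetical guard
    control.interrupt(); // stay on the page; this PR restores the previous URL state
  } else {
    control.proceed();
  }
}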
ok to test
Jenkins, please retest this.
@BenDol Could you please add some tests to the behaviours this PR adds?
Thanks!
Can one of the admins verify this PR? Comment with 'ok to test' to start the build.
Build finished. No test results found.
Build finished. 2762 tests run, 5 skipped, 0 failed.
Build finished. 2763 tests run, 7 skipped, 0 failed.
|
2025-04-01T06:38:36.134945
| 2023-05-11T17:31:39
|
1706255600
|
{
"authors": [
"aleciavogel",
"erseco"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5698",
"repo": "erseco/alpine-moodle",
"url": "https://github.com/erseco/alpine-moodle/pull/28"
}
|
gharchive/pull-request
|
Add mailhog support to docker-compose.yml example
It took me longer than I'd like to admit to figure out how to properly integrate MailHog into one of my own projects' dev environments, so I thought I'd save someone the time and add MailHog to the example docker-compose.yml.
MailHog will catch all outgoing mail from Moodle so that you can easily debug/troubleshoot without worrying about actually sending emails or accidentally exposing your SMTP credentials in your codebase.
Also fixed a merge conflict in the README.
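For reference, the service addition is roughly this shape (image name and ports are MailHog's standard defaults; adjust to taste):
  mailhog:
    image: mailhog/mailhog
    ports:
      - '8025:8025' # web UI for browsing caught mail
    # Moodle's SMTP host/port would then point at mailhog:1025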
Hi @aleciavogel,
Thanks for your contribution! The addition of MailHog support to the example docker-compose.yml is a thoughtful enhancement. This will undoubtedly be a great help in debugging and troubleshooting, especially in preventing accidental exposure of SMTP credentials.
I appreciate the time you took to integrate MailHog into the alpine-moodle project and for taking a step further to share this with the community. Your effort to resolve the merge conflict in the README is also recognized and appreciated.
Before we merge this, I'll run some tests to ensure everything works as expected. I'll get back to you soon.
Thanks again for your contribution!
Best
Hi @aleciavogel,
Thanks again for your valuable contribution. However, after reviewing the MailHog project, it seems to be inactive, as indicated in this issue: https://github.com/mailhog/MailHog/issues/442. This could potentially lead to support and maintenance issues down the line.
Considering this, what are your thoughts on integrating "maildev" instead, as suggested in the aforementioned issue? You can find more about it here: https://maildev.github.io/maildev/. It appears to be actively maintained and could serve the same purpose effectively.
Please let me know your thoughts on this proposed change.
Thanks again for your input and looking forward to your response.
Best
Hey Ernesto,
Thank you for your consideration and thoughtful responses! MailHog has always been my go-to but MailDev sure looks neat. I'll revise my PR to use MailDev instead!
Unfortunately, I can't seem to get MailDev to work with your image, and it's not immediately apparent why. I've consulted the issues in the MailDev repo to see if anyone has encountered something similar. I've tried switching between "tls" and "tcp" for the protocol env variable in the moodle service to no avail, as well as setting incoming and outgoing usernames and passwords for MailDev. Even if I don't get an error upon sending the "Lost your password?" email, it never shows up in the MailDev UI.
Feel free to take a crack at it
|
2025-04-01T06:38:36.215001
| 2016-12-28T06:57:03
|
197805994
|
{
"authors": [
"cengizIO",
"ersinerdal"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5699",
"repo": "ersinerdal/react-redux-immutable-ddd",
"url": "https://github.com/ersinerdal/react-redux-immutable-ddd/issues/1"
}
|
gharchive/issue
|
Add License
Hello
Congrats!
As with any open source project, we must declare our licensing policy.
Done! Thanx.
|
2025-04-01T06:38:36.219234
| 2014-11-25T04:54:13
|
49976751
|
{
"authors": [
"erusev",
"rhukster"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5700",
"repo": "erusev/parsedown-extra",
"url": "https://github.com/erusev/parsedown-extra/issues/25"
}
|
gharchive/issue
|
Issues with multiple block-level elements in definition lists
As tested on the parsedown demo: http://parsedown.org/demo?extra=1
this code does not render as it should within the definition list:
Term 1
: This is a definition with two paragraphs. Lorem ipsum
dolor sit amet, consectetuer adipiscing elit. Aliquam
hendrerit mi posuere lectus.
Vestibulum enim wisi, viverra nec, fringilla in, laoreet
vitae, risus.
: Second definition for term 1, also wrapped in a paragraph
because of the blank line preceding it.
Term 2
: This definition has a code block, a blockquote and a list.
code block.
> block quote
> on two lines.
1. first list item
2. second list item
For reference: https://michelf.ca/projects/php-markdown/extra/#def-list
Thought I would provide some visuals:
Thanks.
I'll look into this as soon as I resolve #4.
Sweet! Thanks for fixing this!
|
2025-04-01T06:38:36.221673
| 2024-07-31T11:59:59
|
2439873848
|
{
"authors": [
"stonechoe"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5701",
"repo": "es-meta/esmeta",
"url": "https://github.com/es-meta/esmeta/pull/252"
}
|
gharchive/pull-request
|
Use fixed number of threads in test262-test concurrent mode
This PR makes the following changes:
Change the -test262-test:concurrent option to be a number.
Use a fixed number of threads in test262-test concurrent mode.
It is expected to resolve issue #251 (if a proper number of threads is given).
Now concurrent and timeout options work well together.
esmeta test262-test -test262-test:progress -test262-test:log \
-test262-test:concurrent=16 -test262-test:timeout=60
# ...
100.00% (48,376/48,376) - P:N = 25,276:23,100 => P/P = 25,276/25,276 (100.00%) [07:58]
# ...
- pass-rate: P/P = 25,276/25,276 (100.00%)
$ esmeta test262-test -test262-test:progress -test262-test:log -test262-test:timeout=60 -test262-test:concurrent=16
# ....
100.00% (48,376/48,376) - P:N = 25,276:23,100 => P/P = 25,276/25,276 (100.00%) [07:52]
|
2025-04-01T06:38:36.223573
| 2021-12-18T23:33:11
|
1083966344
|
{
"authors": [
"glencoe"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5702",
"repo": "es-ude/elastic-ai.creator",
"url": "https://github.com/es-ude/elastic-ai.creator/issues/7"
}
|
gharchive/issue
|
get rid of QConv1d and co.
With PyTorch 1.9 the concept of parametrization was introduced... E.g. a module behaving equivalently to QConv1d with Binarize can be built like this:
import torch
from torch.nn import Conv1d
from torch.nn.utils.parametrize import register_parametrization
from elasticai.creator.layers import Binarize
layer = Conv1d(in_channels=2,
out_channels=3,
kernel_size=(1,),
bias=False)
register_parametrization(layer, "weight", Binarize())
Therefore the implementations of our quantizable convolutions ~and of our QLSTM cells~ should not be needed anymore.
If we decide to still keep the implementations, we should implement them with the help of parametrization.
Edited description: in fact, from what I can tell there is no easy way to set custom activations to be used in the RNN layers, so we still require a custom implementation to realize something like our QLSTM.
|
2025-04-01T06:38:36.243811
| 2023-04-27T23:46:07
|
1687687679
|
{
"authors": [
"bfeist",
"eshaan7"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5703",
"repo": "eshaan7/Flask-Shell2HTTP",
"url": "https://github.com/eshaan7/Flask-Shell2HTTP/issues/53"
}
|
gharchive/issue
|
Seeing intermittent future_key errors
2023-04-25 23:12:39 ERROR:flask_shell2http:future_key ebbb407f already exists
2023-04-25 23:12:39 ERROR:flask_shell2http:No report exists for key: 'ebbb407f'.
These are interspersed with working calls.
I turned off wait=true and switched to polling. This reduced the problem but didn't eliminate it.
Are there any docs on how to ensure the key doesn't already exist? I don't believe I'm managing the keys externally to shell2http.
Thanks so much.
Can you also tell if your client tries to execute the same command with the same args multiple times? If that is the case, you might want to set the force_unique_key parameter to true (see example).
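For reference, a client-side sketch of that parameter (the URL and args are made up):
import requests
# Ask flask-shell2http for a fresh key even when command + args repeat.
resp = requests.post(
    "http://localhost:5000/commands/saythis",
    json={"args": ["hello"], "force_unique_key": True},
)
print(resp.json())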
Ah, this explains it. Yes, several calls with the same parameters are possible. I'll have a look at how to force_unique_key. Thanks for the follow-up.
|
2025-04-01T06:38:36.257271
| 2024-01-04T07:10:09
|
2065123733
|
{
"authors": [
"AnnAngela",
"nzakas"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5704",
"repo": "eslint-community/eslint-plugin-security",
"url": "https://github.com/eslint-community/eslint-plugin-security/issues/136"
}
|
gharchive/issue
|
Bug: false positive for security/detect-object-injection
What version of eslint-plugin-security are you using?
2.1.0
ESLint Environment
Node version: v20.10.0
npm version: v10.2.3
Local ESLint version: 8.56.0
Global ESLint version: Not found
Operating System: linux 6.2.0-1018-azure
What parser are you using?
Default (Espree)
What did you do?
minimal reproduction repo: https://github.com/AnnAngela/eslint-plugin-security-rules-detect-object-injection
What did you expect to happen?
Nothing reported.
What actually happened?
https://github.com/AnnAngela/eslint-plugin-security-rules-detect-object-injection/actions/runs/7406514870/job/20151079711#step:6:7
Participation
[ ] I am willing to submit a pull request for this issue.
Additional comments
According to the docs, I did not do any value assignment and the warning should not be reported.
From what I can tell, this rule is behaving as expected and the documentation needs updating. It currently flags any function call for which an argument in the form object[key] is passed. An assignment isn't necessary, especially because, in your case, foo is being read from the environment.
@nzakas Thanks, but can you explain a bit more clearly why the assignment isn't necessary?
I don't understand why obj[foo] would cause harm even though foo is "constructor" or some other special string.
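To make the flagged pattern concrete, a minimal example of what the rule currently reports (names are made up):
const key = process.env.FOO; // attacker-influenced in principle
doSomething(obj[key]); // security/detect-object-injection fires here, with no assignment at all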
|
2025-04-01T06:38:36.262062
| 2023-01-06T18:28:48
|
1522976742
|
{
"authors": [
"amareshsm",
"nzakas"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5705",
"repo": "eslint/eslint.org",
"url": "https://github.com/eslint/eslint.org/issues/397"
}
|
gharchive/issue
|
Bug: Local development broken
URL(s)
(All)
What did you do?
Tried writing a new blog post with a date in the future.
What did you expect to happen?
I expected the blog post to be visible at https://localhost:2022/blog. We had this set up so that future posts would be shown locally to make it easier to debug, but then would not be shown on the live site.
I also expected the blog to update whenever I edit the Markdown file.
What actually happened?
The blog post does not show up on the blog page, and even if I hack it so that it does show up, the blog post is not being watched for changes so I need to stop and restart the server just to see changes.
Participation
[ ] I am willing to submit a pull request for this issue.
Additional comments
There were a bunch of changes made to package.json that I believe are causing both of these issues.
For some reason, the environment variable CONTEXT is now being set here:
"watch:eleventy": "cross-env CONTEXT=dev eleventy --serve --port=2022",
However, we specifically expect no CONTEXT to determine whether or not to show future blog posts.
I'm not sure why watching isn't working otherwise. It appears to work for .js files but it does not work for .md files.
https://github.com/eslint/eslint.org/blob/427e3bc27aff8e5189b240cb36187234d8281d63/package.json#L20
Renaming the CONTEXT env fixes this issue.
|
2025-04-01T06:38:36.307185
| 2021-01-09T05:03:52
|
782514368
|
{
"authors": [
"TheEssem",
"adroitwhiz"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5706",
"repo": "esmBot/esmBot",
"url": "https://github.com/esmBot/esmBot/pull/49"
}
|
gharchive/pull-request
|
Clean up image API code paths
Most commits have individual descriptions, but at a high level I'm essentially getting rid of the complex mutually-recursing-across-processes behavior previously present in image.js. I've moved the part that actually calls into ImageMagick into its own file, so instead of commands calling magick.run which calls the image API which calls magick.run again (or magick.run calling itself in a worker thread if the image API is disabled), magick.run and the image API both delegate to the new "call ImageMagick" function.
I've also gotten rid of some duplicate code for handling GIFs, removed some dead code relating to image types (it can be re-added later in a much cleaner way if needed), and fixed a bug where GIF-only commands would throw an internal error instead of displaying the intended "that isn't a GIF!" message.
When testing this out on my instance, something happened where it pulled the previous image instead of the current one when running the motivate command. No idea how these changes could have caused that (or if they even did) but I'll look into it.
Never mind, it seems to be an issue with the dev bot as well.
Looks pretty good.
|
2025-04-01T06:38:36.327579
| 2023-09-21T13:16:22
|
1906962973
|
{
"authors": [
"bjoernQ",
"jessebraham"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5707",
"repo": "esp-rs/esp-hal",
"url": "https://github.com/esp-rs/esp-hal/issues/806"
}
|
gharchive/issue
|
Add support for MCUboot
MCUboot support was added for the ESP32-C3 in #49, we should continue adding support for additional devices. There is interest in this from other teams within Espressif currently, and as such it is likely a useful feature for some community members as well.
See esp32c3_hal/src/lib.rs as a reference. Note that additional changes to linker scripts are required, too.
[ ] ESP32
[ ] ESP32-C2
[ ] ESP32-C3
[ ] ESP32-C6
[ ] ESP32-H2
[ ] ESP32-S2
[ ] ESP32-S3
In the meantime, support for ESP32-C3 was removed
I'm going to close this for now, as we have no plans on working on this any time soon. We can open a new issue if we decide to re-visit this.
|
2025-04-01T06:38:36.630575
| 2024-09-11T07:06:02
|
2518747758
|
{
"authors": [
"peterdragun"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5708",
"repo": "espressif/clang-tidy-runner",
"url": "https://github.com/espressif/clang-tidy-runner/pull/49"
}
|
gharchive/pull-request
|
feat: Drop support for ESP-IDF 4.4 and Python 3.6
Description
This MR includes several fixes for the pipeline to pass:
The codereport maintainer has merged my MR (https://github.com/paulgessinger/codereport/pull/4) and released a new version 0.4.0. This should resolve dependency conflicts mentioned in: https://github.com/espressif/clang-tidy-runner/pull/46#issuecomment-2258096149
Update the version of actions because we were using a deprecated version, which does not work anymore. Failed job: https://github.com/espressif/clang-tidy-runner/actions/runs/10806704544/job/29976026469
Drop support for ESP-IDF 4.4 (which was the last to support Python 3.6 so this was dropped as well)
Reasons to drop ESP-IDF 4.4
TLDR: dependency hell
I was trying to make the pipeline work on the latest IDF, but there was a conflict with the pygments package because IDF required >=3.13. The requirement is coming from the codereport package, so I fixed that upstream and wanted to update the version here to match at least that version (codereport>=0.4.0).
Then I realized that we are still stuck with codereport version 0.2.5, because of Jinja2. The newest versions of codereport (3.1+) require Jinja2==3.1.1, but ESP-IDF has this requirement set to <3.1, so there was no way to satisfy both ESP-IDF 4.4 and the latest versions.
Considering that ESP-IDF 4.4 is not supported anymore, this is IMO the best solution.
This is now working in the CI with IDF 5.0+, but if we hit some dependency issue again we should remove the dependency on the codereport package, there is not a lot of code, mostly just templates for HTML, so we should consider implementing this ourselves.
Related
Internal tracker: IDF-10919
@dobairoland PTAL, this should be ready to merge.
|
2025-04-01T06:38:36.638853
| 2017-09-25T02:23:15
|
260138030
|
{
"authors": [
"lucazader",
"negativekelvin",
"projectgus"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5709",
"repo": "espressif/esp-idf",
"url": "https://github.com/espressif/esp-idf/issues/1035"
}
|
gharchive/issue
|
BLE Notifications stop working after SPI init
Hi All,
I have a weird one for you.
I have a program that communicates over BLE (peripheral and central, connected as a peripheral) to a computer. The computer sends some commands and then sends a message to enable/initialise the SPI Master.
Before the SPI is enabled, the notifications work well, as expected. After the SPI is initialised (by calling spi_bus_initialize and spi_bus_add_device) the notifications cease to work.
I have discovered that the esp_ble_gatts_send_indicate function returns ESP_FAIL and with a bit of modification to the esp_ble_gatts_send_indicate function I seem to be getting the BT_STATUS_NOMEM error.
I have checked the amount of free heap to be above 80K at all times.
I have also tried increasing the BT task stack size to 8192. This didn't help.
The strange part that I mentioned earlier is that this only seems to happen when I build on windows (with the latest pre-compiled toolchain).
If I build with the exact same source code (IDF at the same commit with no changes, and Project at the same commit with no changes) on fedora (with the latest pre-compiled toolchain), the program works fine. All notifications are sent as expected, no BT_STATUS_NOMEM occurs.
We have tried to get to the bottom of this for a couple of days now with no success.
Is there anything that could be causing this?
Please let me know if there is any more information that you would like.
@projectgus @igrr
Suggest that you post the two sets of bin/elf files
I've seen bugs that appear on certain machines only due to memory corruption of static data combined with the build order & layout (ie on different systems the object files are linked in a different order, leads to different order of static memory addresses in RAM). So for some builds the memory corruption (buffer overflow, etc.) corrupts something harmless or lands in padding, but for other builds it may break something critical.
There is an item in our ticketing system to make IDF builds more reproducible to avoid this kind of phantom problem, but there are some technical sticking points before we can achieve this.
Unfortunately the heap debugging features don't extend to static memory, so if this is indeed static memory being corrupted then they're not useful. But you could try enabling heap poisoning and calling heap_caps_check_integrity() anyhow, just in case:
https://esp-idf.readthedocs.io/en/latest/api-reference/system/heap_debug.html#configuration
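A minimal sketch of such a check (assuming heap poisoning is enabled in menuconfig; the exact helper available depends on your IDF version):
```c
#include <stdlib.h>
#include "esp_heap_caps.h"

// Call this around suspect code paths. With comprehensive heap poisoning
// enabled, it detects writes that overflow heap allocations.
static void assert_heap_ok(void)
{
    if (!heap_caps_check_integrity_all(true /* print errors */)) {
        abort();  // corruption detected; the log shows the offending region
    }
}
```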
You can also manually look at the linker map files or symbol dumps (via objdump) from each of the ELF files, and look for anything which might stick out.
Hi all,
Thanks for your responses.
It seems to have been related to a globally declared variable that was not declared as static. It has the same name as a variable used in lots of the BLE stack (ret).
Declaring this variable locally in a function or adding static to the global declaration fixed the issue.
Hi @lucazader,
Glad you sorted this out.
It seems to have been related to a globally declared variable that was not declared as static. It seems to
have a name the same as found in lots of the ble stack (ret).
If there's a part of IDF that includes a globally declared variable with a generic name like "ret" then this is also a bug which we should fix. I had a quick grep of the BT stack code and can't see any global symbol named "ret" (lots of local variables using this name). If you think there may be such a bug here then please reopen the issue.
Hi @projectgus
It was a global variable called "ret" in my code, however it seemed to conflict with the local ret variables, or at least one of them.
Not 100% sure what was going on, but it definitely was to do with that variable.
|
2025-04-01T06:38:36.668614
| 2024-01-29T16:44:20
|
2105921617
|
{
"authors": [
"andylinpersonal",
"igrr"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5710",
"repo": "espressif/esp-idf",
"url": "https://github.com/espressif/esp-idf/issues/13072"
}
|
gharchive/issue
|
Lacking double-checked locking optimization leads to significantly slower code for static local variables on RISC-V (IDFGH-12004)
Answers checklist.
[X] I have read the documentation ESP-IDF Programming Guide and the issue is not addressed there.
[X] I have updated my IDF branch (master or release) to the latest version and checked that the issue is present there.
[X] I have searched the issue tracker for a similar issue and not found a similar issue.
IDF version.
release/v5.1 and master
Espressif SoC revision.
esp32c6: v0.1, esp32c3: v0.4
Operating System used.
Linux
How did you build your project?
Command line with idf.py
If you are using Windows, please specify command line type.
None
Development Kit.
esp32c3: custom | esp32c6: esp32-c6-devkitc-1-n8 | esp32s3: custom
Power Supply used.
USB
What is the expected behavior?
The C++11 standard requires thread-safe, on-demand initialization of local static variables (N2660), so GCC introduces a guard variable and guard functions to protect the underlying static variable.
When control passes through their declaration for the first time, the guard functions __cxa_guard_* are called and ensure that the initialization is performed successfully exactly once.
The guard variable records the current state of the local static variable.
Guard functions require some synchronization mechanisms to work, so they're somewhat heavy.
GCC introduces another inlined check (the double-checked locking optimization) to bypass the guard functions after the local static variable is successfully initialized.
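In source-level terms, the optimization amounts to something like the following sketch (illustrative only: Widget and all names here are made up, and the real guard is the ABI-defined __cxa_guard object rather than a std::once_flag):
```cpp
#include <atomic>
#include <mutex>
#include <new>

// Illustrative stand-in for what the compiler emits around a local static.
struct Widget { int v = 42; };

Widget& get_widget()
{
    static std::atomic<bool> ready{false};   // the guard's "ready" byte
    static std::once_flag once;              // stands in for __cxa_guard_*
    alignas(Widget) static unsigned char storage[sizeof(Widget)];

    if (!ready.load(std::memory_order_acquire)) {   // cheap inlined fast path
        std::call_once(once, [] {                   // heavyweight slow path, run once
            ::new (storage) Widget();
            ready.store(true, std::memory_order_release);
        });
    }
    return *std::launder(reinterpret_cast<Widget*>(storage));
}
```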
Problem: Some targets may lack the double-checked locking optimization, and the guard functions are always called no matter whether the local static variable is initialized or not.
What is the actual behavior?
Access to the local static variable on some target/GCC combinations is significantly slower than on others.
After a bit of disassembling, I found the double-checked locking optimization is missing from all RISC-V targets on GCC 12.2, and still missing from RV32IMC targets on GCC 13.2.
Affected targets:
| IDF ver. | GCC ver. | C2 | C3 | C6 | H2 | P4 | S3 (XT) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| release/v5.1 | riscv32-esp-elf/esp-12.2.0_20230208 | + | + | + | + | x | - |
| master | riscv32-esp-elf/esp-13.2.0_20230928 | + | + | - | - | - | - |
'+' for affected, '-' for not affected, 'x' for not supported.
S3 is an Xtensa target, not affected by this problem. Just for comparison.
Summary of the implemented ISA extensions for each RISC-V target:
| Target | Implemented ISA Ext. |
| --- | --- |
| ESP32-C2 | rv32imc_zicsr_zifencei |
| ESP32-C3 | rv32imc_zicsr_zifencei |
| ESP32-C6 | rv32imac_zicsr_zifencei |
| ESP32-H2 | rv32imac_zicsr_zifencei |
| ESP32-P4 | rv32imafc_zicsr_zifencei |
-march parameters were taken from the particular toolchain files.
Steps to reproduce.
Use the default config for each target.
Design four test functions in main/main.cpp:
1. A normal testing function for a file-scoped static variable, as a baseline (main.cpp: rand_global_static()).
2. A normal testing function for a local static variable, i.e. the function under test (main.cpp: rand_local_static()).
3. A handcrafted equivalent of the correctly optimized version of 2 (main.cpp: rand_opt_handcraft_local_static()).
4. A handcrafted equivalent of the affected version of 2 (main.cpp: rand_naive_handcraft_local_static()).
Benchmark all of them.
For the expected code, the performance of rand_local_static() should be on par with rand_opt_handcraft_local_static() (target with -march=rv32imac_zicsr_zifencei, compiled with GCC 13.2).
For the affected code, the performance of rand_local_static() should be similar to rand_naive_handcraft_local_static() (targets with -march=rv32imc_zicsr_zifencei, compiled with GCC 13.2 or 12.2).
Debug Logs.
* Only C3 and C6 have been tested on a real machine. The following are some results for {c3, c6} x {release/v5.1, master}.
* Benchmark results
* Targets with affected codegen
* ESP32-C3, release/v5.1
```
I (135181) local-static: native local static duration: 266015 \
I (135181) local-static: optimized handcrafted local static duration: 13537 |
I (135181) local-static: naive handcrafted local static duration: 266021 /
I (135191) local-static: global static duration: 9031
I (135201) local-static: penalty of local static: 96.605%
```
* ESP32-C3, master
```
I (26840) local-static: native local static duration: 202079 \
I (26840) local-static: optimized handcrafted local static duration: 14353 |
I (26850) local-static: naive handcrafted local static duration: 201665 /
I (26860) local-static: global static duration: 9440
I (26860) local-static: penalty of local static: 95.329%
```
* ESP32-C6, release/v5.1
```
I (58125) local-static: native local static duration: 272966 \
I (58125) local-static: optimized handcrafted local static duration: 13946 |
I (58135) local-static: naive handcrafted local static duration: 272560 /
I (58145) local-static: global static duration: 9031
I (58145) local-static: penalty of local static: 96.692%
```
* Target with expected result:
* ESP32-C6, master
```
I (62305) local-static: native local static duration: 13944 \
I (62305) local-static: optimized handcrafted local static duration: 13535 /
I (62315) local-static: naive handcrafted local static duration: 199632
I (62315) local-static: global static duration: 9030
I (62325) local-static: penalty of local static: 35.241%
```
* Disassembly of the expected code:
* ESP32-C6, master
```
420082b8 <rand_local_static()>:
{
420082b8: 1141 add sp,sp,-16
420082ba: c606 sw ra,12(sp)
static prng_t s_pv_rng;
420082bc: 4080c7b7 lui a5,0x4080c
420082c0: 4e87c783 lbu a5,1256(a5) # 4080c4e8 <guard variable for rand_local_static()::s_pv_rng>
// Here's a fence to safely load the ready flag from the guard variable.
420082c4: 0ff0000f fence
420082c8: 0ff7f793 zext.b a5,a5
// Double-checked to bypass the heavy __cxa_guard_* guard functions.
420082cc: /-- cb85 beqz a5,420082fc <rand_local_static()+0x44>
_M_x = __detail::__mod<_UIntType, __m, __a, __c>(_M_x);
420082ce: /--|-> 4080c737 lui a4,0x4080c
420082d2: | | 4f072503 lw a0,1264(a4) # 4080c4f0 <rand_local_static()::s_pv_rng>
_Tp __res = __a * __x + __c;
420082d6: | | 41c657b7 lui a5,0x41c65
420082da: | | e6d78793 add a5,a5,-403 # 41c64e6d <g_saved_pc+0x13e4e71>
420082de: | | 02f50533 mul a0,a0,a5
420082e2: | | 678d lui a5,0x3
420082e4: | | 03978793 add a5,a5,57 # 3039 <RvExcFrameSize+0x2fa5>
420082e8: | | 953e add a0,a0,a5
__res %= __m;
420082ea: | | 800007b7 lui a5,0x80000
420082ee: | | 17fd add a5,a5,-1 # 7fffffff <LP_ANA_PERI+0x1ff4d3ff>
420082f0: | | 8d7d and a0,a0,a5
_M_x = __detail::__mod<_UIntType, __m, __a, __c>(_M_x);
420082f2: | | 4ea72823 sw a0,1264(a4)
}
420082f6: | | 40b2 lw ra,12(sp)
420082f8: | | 0141 add sp,sp,16
420082fa: | | 8082 ret
static prng_t s_pv_rng;
420082fc: | \-> 4080c537 lui a0,0x4080c
42008300: | 4e850513 add a0,a0,1256 # 4080c4e8 <guard variable for rand_local_static()::s_pv_rng>
42008304: | d24fc0ef jal 42004828 <__cxa_guard_acquire>
42008308: +----- d179 beqz a0,420082ce <rand_local_static()+0x16>
{ seed(__s); }
4200830a: | 4585 li a1,1
4200830c: | 4080c537 lui a0,0x4080c
42008310: | 4f050513 add a0,a0,1264 # 4080c4f0 <rand_local_static()::s_pv_rng>
42008314: | f99ff0ef jal 420082ac <std::linear_congruential_engine<unsigned int, 1103515245u, 12345u, 2147483648u>::seed(unsigned int)>
42008318: | 4080c537 lui a0,0x4080c
4200831c: | 4e850513 add a0,a0,1256 # 4080c4e8 <guard variable for rand_local_static()::s_pv_rng>
42008320: | de2fc0ef jal 42004902 <__cxa_guard_release>
42008324: \----- b76d j 420082ce <rand_local_static()+0x16>
```
* ESP32-C3, release/v5.1 (handcrafted version of above-mentioned compiler-generated code)
```
420077a6 <rand_opt_handcraft_local_static()>:
{
420077a6: 1141 add sp,sp,-16
420077a8: c606 sw ra,12(sp)
uint8_t r = s_rng_guard_optimized.ready;
420077aa: 3fc8c7b7 lui a5,0x3fc8c
420077ae: 76c7c783 lbu a5,1900(a5) # 3fc8c76c <s_rng_guard_optimized>
__sync_synchronize();
420077b2: 0ff0000f fence
if (!r) {
420077b6: /-- cb8d beqz a5,420077e8 <rand_opt_handcraft_local_static()+0x42>
_M_x = __detail::__mod<_UIntType, __m, __a, __c>(_M_x);
420077b8: /--|-> 3fc8c737 lui a4,0x3fc8c
420077bc: | | 76872503 lw a0,1896(a4) # 3fc8c768 <s_rng_optimized>
_Tp __res = __a * __x + __c;
420077c0: | | 41c657b7 lui a5,0x41c65
420077c4: | | e6d78793 add a5,a5,-403 # 41c64e6d <_coredump_iram_end+0x18da66d>
420077c8: | | 02f50533 mul a0,a0,a5
420077cc: | | 678d lui a5,0x3
420077ce: | | 03978793 add a5,a5,57 # 3039 <_esp_memprot_align_size+0x2e39>
420077d2: | | 953e add a0,a0,a5
__res %= __m;
420077d4: | | 800007b7 lui a5,0x80000
420077d8: | | fff7c793 not a5,a5
420077dc: | | 8d7d and a0,a0,a5
_M_x = __detail::__mod<_UIntType, __m, __a, __c>(_M_x);
420077de: | | 76a72423 sw a0,1896(a4)
}
420077e2: | | 40b2 lw ra,12(sp)
420077e4: | | 0141 add sp,sp,16
420077e6: | | 8082 ret
if (__cxxabiv1::__cxa_guard_acquire((__cxxabiv1::__guard *)&s_rng_guard_optimized)) {
420077e8: | \-> 3fc8c537 lui a0,0x3fc8c
420077ec: | 76c50513 add a0,a0,1900 # 3fc8c76c <s_rng_guard_optimized>
420077f0: | c34fd0ef jal 42004c24 <__cxa_guard_acquire>
420077f4: +----- d171 beqz a0,420077b8 <rand_opt_handcraft_local_static()+0x12>
{ seed(__s); }
420077f6: | 4585 li a1,1
420077f8: | 3fc8c537 lui a0,0x3fc8c
420077fc: | 76850513 add a0,a0,1896 # 3fc8c768 <s_rng_optimized>
42007800: | ef9ff0ef jal 420076f8 <std::linear_congruential_engine<unsigned int, 1103515245u, 12345u, 2147483648u>::seed(unsigned int)>
__cxxabiv1::__cxa_guard_release((__cxxabiv1::__guard *)&s_rng_guard_optimized);
42007804: | 3fc8c537 lui a0,0x3fc8c
42007808: | 76c50513 add a0,a0,1900 # 3fc8c76c <s_rng_guard_optimized>
4200780c: | cf4fd0ef jal 42004d00 <__cxa_guard_release>
42007810: \----- b765 j 420077b8 <rand_opt_handcraft_local_static()+0x12>
```
* Disassembly of the affected `rand_local_static()`:
* ESP32-C3, release/v5.1
* ESP32-C6, release/v5.1 (disassembly for these two targets are almost identical)
* ESP32-C3, master (only slightly different from the previous one, omitted)
```
42007706 <rand_local_static()>:
{
42007706: 1141 add sp,sp,-16
42007708: c606 sw ra,12(sp)
static prng_t s_pv_rng;
4200770a: 3fc8c537 lui a0,0x3fc8c
4200770e: 77050513 add a0,a0,1904 # 3fc8c770 <guard variable for rand_local_static()::s_pv_rng>
// Always call the guard function without checking
42007712: d12fd0ef jal 42004c24 <__cxa_guard_acquire>
42007716: /-- e90d bnez a0,42007748 <rand_local_static()+0x42>
_M_x = __detail::__mod<_UIntType, __m, __a, __c>(_M_x);
42007718: /--|-> 3fc8c737 lui a4,0x3fc8c
4200771c: | | 77872503 lw a0,1912(a4) # 3fc8c778 <rand_local_static()::s_pv_rng>
_Tp __res = __a * __x + __c;
42007720: | | 41c657b7 lui a5,0x41c65
42007724: | | e6d78793 add a5,a5,-403 # 41c64e6d <_coredump_iram_end+0x18da66d>
42007728: | | 02f50533 mul a0,a0,a5
4200772c: | | 678d lui a5,0x3
4200772e: | | 03978793 add a5,a5,57 # 3039 <_esp_memprot_align_size+0x2e39>
42007732: | | 953e add a0,a0,a5
__res %= __m;
42007734: | | 800007b7 lui a5,0x80000
42007738: | | fff7c793 not a5,a5
4200773c: | | 8d7d and a0,a0,a5
_M_x = __detail::__mod<_UIntType, __m, __a, __c>(_M_x);
4200773e: | | 76a72c23 sw a0,1912(a4)
}
42007742: | | 40b2 lw ra,12(sp)
42007744: | | 0141 add sp,sp,16
42007746: | | 8082 ret
{ seed(__s); }
42007748: | \-> 4585 li a1,1
4200774a: | 3fc8c537 lui a0,0x3fc8c
4200774e: | 77850513 add a0,a0,1912 # 3fc8c778 <rand_local_static()::s_pv_rng>
42007752: | fa7ff0ef jal 420076f8 <std::linear_congruential_engine<unsigned int, 1103515245u, 12345u, 2147483648u>::seed(unsigned int)>
static prng_t s_pv_rng;
42007756: | 3fc8c537 lui a0,0x3fc8c
4200775a: | 77050513 add a0,a0,1904 # 3fc8c770 <guard variable for rand_local_static()::s_pv_rng>
4200775e: | da2fd0ef jal 42004d00 <__cxa_guard_release>
42007762: \----- bf5d j 42007718 <rand_local_static()+0x12>
```
More Information.
Workarounds:
Use the constinit keyword on a type with a constexpr constructor and a default destructor (a minimal sketch follows this list).
Use a file-scoped static variable instead (as in rand_global_static() in the testing code below).
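A minimal sketch of the constinit workaround (assuming C++20; cheap_prng is a made-up type with a constexpr constructor, since the standard engine's constructor is not constexpr):
```cpp
// With constinit the variable is constant-initialized at compile time, so no
// guard variable and no __cxa_guard_* calls are emitted for it at all.
struct cheap_prng {
    unsigned state = 1;
    constexpr cheap_prng() = default;
    unsigned operator()() { return state = state * 1103515245u + 12345u; }
};

int rand_constinit_static()
{
    static constinit cheap_prng s_rng;  // guaranteed compile-time initialization
    return s_rng();
}
```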
Testing code:
CMakeLists.txt
cmake_minimum_required(VERSION 3.16)
include($ENV{IDF_PATH}/tools/cmake/project.cmake)
project(static-local-variable-test)
main/CMakeLists.txt
idf_component_register(SRCS "main.cpp")
main/main.cpp
#include "esp_log.h"
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include <chrono>
#include <cxxabi.h>
#include <numeric>
#include <random>
// Faster if using power of two. Make cpu busy for a while.
using prng_t = std::linear_congruential_engine<unsigned, 1103515245, 12345, 1u << 31>;
static const char TAG[] = "local-static";
prng_t g_rng;
int rand_global_static()
{
return g_rng();
}
/** @brief Normal local static getter */
int rand_local_static()
{
static prng_t s_pv_rng;
return s_pv_rng();
}
/**
* @brief ABI-defined guard variable for local static variable and lock functions
* @note Defined in the ${IDF_PATH}/components/cxx/cxx_guards.cpp
*/
typedef struct {
uint8_t ready;
uint8_t pending;
} guard_t;
static guard_t s_rng_guard_optimized = {0, 0};
static prng_t s_rng_optimized;
int rand_opt_handcraft_local_static()
{
#ifdef __xtensa__
// Only S3 has this extra fence
__sync_synchronize();
#endif
uint8_t r = s_rng_guard_optimized.ready;
__sync_synchronize();
if (!r) {
if (__cxxabiv1::__cxa_guard_acquire((__cxxabiv1::__guard *)&s_rng_guard_optimized)) {
new (&s_rng_optimized) prng_t;
__cxxabiv1::__cxa_guard_release((__cxxabiv1::__guard *)&s_rng_guard_optimized);
}
}
return s_rng_optimized();
}
static guard_t s_rng_guard_naive = {0, 0};
static prng_t s_rng_naive;
int rand_naive_handcraft_local_static()
{
if (__cxxabiv1::__cxa_guard_acquire((__cxxabiv1::__guard *)&s_rng_guard_naive)) {
new (&s_rng_naive) prng_t;
__cxxabiv1::__cxa_guard_release((__cxxabiv1::__guard *)&s_rng_guard_naive);
}
return s_rng_naive();
}
static constexpr size_t repeat = UINT16_MAX;
template <int (*Fn)(void)>
static unsigned test_runner()
{
unsigned randval = 0;
std::chrono::high_resolution_clock::time_point t1 = std::chrono::high_resolution_clock::now();
for (size_t i = 0; i < repeat; i++) {
randval += Fn();
}
std::chrono::high_resolution_clock::time_point t2 = std::chrono::high_resolution_clock::now();
unsigned dur = std::chrono::duration_cast<std::chrono::microseconds>(t2 - t1).count();
return dur;
}
static void test_task(void *)
{
static constexpr auto width = std::numeric_limits<unsigned>::digits10;
while (true) {
auto dur_l = test_runner<rand_local_static>();
auto dur_oh = test_runner<rand_opt_handcraft_local_static>();
auto dur_nh = test_runner<rand_naive_handcraft_local_static>();
auto dur_g = test_runner<rand_global_static>();
ESP_LOGI(TAG, "native local static duration: %*u", width, dur_l);
ESP_LOGI(TAG, "optimized handcrafted local static duration: %*u", width, dur_oh);
ESP_LOGI(TAG, "naive handcrafted local static duration: %*u", width, dur_nh);
ESP_LOGI(TAG, "global static duration: %*u", width, dur_g);
ESP_LOGI(TAG, "penalty of local static: %.3f%%", (1.0f - (float)dur_g / dur_l) * 100.0f);
vTaskDelay(pdMS_TO_TICKS(1000));
}
}
extern "C" void app_main()
{
#if portNUM_PROCESSORS > 1
const BaseType_t core = 1;
#else
const BaseType_t core = 0;
#endif
TaskHandle_t p_tsk;
assert(xTaskCreatePinnedToCore(&test_task, TAG, 4096, nullptr, CONFIG_ESP32_PTHREAD_TASK_PRIO_DEFAULT, &p_tsk,
core) == pdPASS);
}
Problem: Some targets may lack the double-checked locking optimization, and the guard functions are always called no matter whether the local static variable is initialized or not.
Unfortunately, I think this is probably expected, since the double-checked lock initialization using an atomic guard variable is only implemented in GCC on targets with support for atomic instructions (i.e. a extension on RISC-V):
Static initialization expansion relies on get_guard_cond (code)
get_guard_cond generates a constant zero expression if is_atomic_expensive_p is true (code)
is_atomic_expensive_p calls can_compare_and_swap_p with the 2nd argument (allow_libcall) set to false (code)
So if the target doesn't support atomics via instructions (only via library calls) then is_atomic_expensive_p will return true, and the atomic guard related code won't be generated.
I think this probably needs to be reported in upstream GCC as a "allow atomic libcalls for double-check guard implementation" type of a feature request.
Newer Espressif RISC-V chips should all have the "A" extension, so this probably won't be an issue going forward.
Hi igrr:
But even on the RV32IMAC targets (C6 and onward), it seems like the generated code doesn't utilize any instructions from the A extension at all?
That's right, the load operation generated by build_atomic_load_type is lbu, which isn't an atomic instruction. Since the "initialized" flag is represented by 1 byte, and we don't need to atomically modify the flag, it's not necessary to use lr/sc instructions there, so it seems like the compiler is doing the right thing.
Unfortunately, I don't know enough about other architectures which GCC targets to tell if there is a specific reason for using build_atomic_load_type there. I do see the same behavior for other architectures, though — https://godbolt.org/z/cz5cvabv8 illustrates the same issue on Xtensa. On ESP32-S2 (no atomic instructions) no double-checked locking is used, but on ESP32 (has atomic instructions) it is used.
|
2025-04-01T06:38:36.677153
| 2024-05-21T12:57:41
|
2308249487
|
{
"authors": [
"evoon",
"vik-gokhale"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5711",
"repo": "espressif/esp-idf",
"url": "https://github.com/espressif/esp-idf/issues/13827"
}
|
gharchive/issue
|
NO_AP_FOUND_IN_AUTHMODE_THRESHOLD while threshold's condition is respected (IDFGH-12863)
Answers checklist.
[X] I have read the documentation ESP-IDF Programming Guide and the issue is not addressed there.
[X] I have updated my IDF branch (master or release) to the latest version and checked that the issue is present there.
[X] I have searched the issue tracker for a similar issue and not found a similar issue.
IDF version.
v5.2.1
Espressif SoC revision.
ESP32-C6 v0.0
Operating System used.
Windows
How did you build your project?
VS Code IDE
If you are using Windows, please specify command line type.
None
Development Kit.
ESP32-C6-WROOM-1
Power Supply used.
USB
What is the expected behavior?
I expect the ESP32 STA to connect to the AP, as the STA threshold is set to WIFI_AUTH_WPA2 and the AP's authmode is set to >WPA2.
What is the actual behavior?
I am getting NO_AP_FOUND_IN_AUTHMODE_THRESHOLD, while the authmode.threshold in the STA is greater than the AP's one.
No matter what threshold I put in authmode.threshold, the only AP configuration it will connect to is WPA3.
Steps to reproduce.
Set station config's authmode.threshold to anything/don't set it
Connect to a wifi network with a security inferior to WPA3
Debug Logs.
SCAN RESULT : SSID=TestReseau, AUTHMODE=3
I (241251) http_server: POST /connect.json
I (241261) http_server: ssid: TestReseau, password: Password1!
I (241261) wifi_manager: MESSAGE: ORDER_CONNECT_STA
I (241271) wifi_manager: wifi_sta_config: ssid:TestReseau password:Password1!
I (241281) wifi_manager: wifi_sta_config: sta_authmode 3
I (241291) wifi_manager: wifi_sta_config: RM enabled 1
I (241291) wifi_manager: wifi_sta_config: BTM enabled 1
I (241301) wifi_manager: wifi_sta_config: MBO enabled 1
I (241311) wifi_manager: wifi_sta_config: FT enabled 1
I (241311) wifi_manager: wifi_sta_config: OWE enabled 1
I (241321) wifi_manager: wifi_sta_config: PMF capable enabled 1
I (241331) wifi_manager: wifi_sta_config: PMF required enabled 0
I (241341) wifi_manager: wifi_sta_config: transition_disable enabled 0
I (244171) wifi_manager: WIFI_EVENT_STA_DISCONNECTED
I (244171) wifi_manager: MESSAGE: EVENT_STA_DISCONNECTED with Reason code: 211
I (244181) wifi_manager: MESSAGE: EVENT_STA_DISCONNECTED with rssi: -128
I (244181) wifi_manager: Set STA IP String to: <IP_ADDRESS>
More Information.
I have tried setting OWE to 1 in the STA config, and also setting transition_disable to 1 in the STA config.
Ultimately, the STA will only connect to the AP if the security of the AP is set to WPA3
Hi @evoon
Could you please enable WIFI debug print and share the log?
CONFIG_WPA_DEBUG_PRINT=y
CONFIG_MBEDTLS_DEBUG=y
CONFIG_MBEDTLS_DEBUG_LEVEL_VERBOSE=y
CONFIG_MBEDTLS_DEBUG_LEVEL=4
CONFIG_LOG_DEFAULT_LEVEL_DEBUG=y
CONFIG_LOG_DEFAULT_LEVEL=4
|
2025-04-01T06:38:36.680236
| 2016-12-09T13:20:48
|
194595336
|
{
"authors": [
"Spritetm",
"dantonets",
"dumarjo"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5712",
"repo": "espressif/esp-idf",
"url": "https://github.com/espressif/esp-idf/issues/153"
}
|
gharchive/issue
|
Loading application from openOCD
Hi,
Is it possible to upload the firmware through GDB? When I try it, I get this error from OpenOCD:
Error: esp32.cpu0: xtensa_write_memory (line 1024): DSR (8020CC13) indicates DIR instruction generated an exception! Warn : esp32.cpu0: Failed writing 4096 bytes at address 0x3F400010
Regards
Jonathan
Not yet; OpenOCD is not aware that the application lives in flash and will try to write to the flash as if it's RAM. Remedying this is on our ToDo-list, but we haven't gotten around to this.
Hi,
Thanx for the Info.
Jonathan
Hi,
I got this problem as well. The xtensa-gdb works OK when started from the command line, but it fails with this message when I try to start it from Eclipse (Neon.3) - it tries to write to the drom0_0_seg and obviously fails, since it is read-only memory. So, can somebody point me to the Eclipse configuration to avoid this write?
Thanks.
Never mind, I found a fix for that. Thanks.
|
2025-04-01T06:38:36.683061
| 2018-06-27T14:34:04
|
336256945
|
{
"authors": [
"me-no-dev",
"nicola-lunghi",
"projectgus"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5713",
"repo": "espressif/esp-idf",
"url": "https://github.com/espressif/esp-idf/issues/2114"
}
|
gharchive/issue
|
using ccache with esp-idf
Hi all,
Is it possible to use ccache to speed up the build?
Thanks
Nicola Lunghi
totally you can ;) many of us already use it :)
OK, I managed to create a directory called bin-ccache in the xtensa installation directory and link all the executables in bin to ccache.
Then I added the xtensa bin-ccache folder to PATH before the bin folder itself,
and ccache works perfectly.
Hi @nicola-lunghi ,
Just FYI, if you're still using the cmake branch then you should get ccache enabled automatically if it's on your PATH:
https://github.com/espressif/esp-idf/blob/feature/cmake/tools/cmake/idf_functions.cmake#L101
(But setting up links in the way you mention will also work - I have this set on my local system as well.)
Angus
|
2025-04-01T06:38:36.701819
| 2019-03-12T00:33:01
|
419746640
|
{
"authors": [
"dralves",
"ginkgm",
"igrr"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5714",
"repo": "espressif/esp-idf",
"url": "https://github.com/espressif/esp-idf/issues/3162"
}
|
gharchive/issue
|
SPI slave broken by commit 58955a7 (IDFGH-702)
Environment
Development Kit: ESP32-DevKitC
Using VSPI with IOMUX pins as described in https://github.com/espressif/esp-idf/blob/master/docs/en/api-reference/peripherals/spi_slave.rst
No dma
Init code:
host_(VSPI_HOST);
//...
spi_bus_config_t buscfg = {
.mosi_io_num = 23,
.miso_io_num = 19,
.sclk_io_num = 18,
.quadwp_io_num = -1,
.quadhd_io_num = -1
};
spi_slave_interface_config_t slvcfg = {
.spics_io_num = 5,
.flags = 0,
.queue_size = 1,
.mode = 0
};
//...
RETURN_NOT_OK(spi_slave_initialize(host_, &buscfg, &slvcfg, 0));
Problem Description
Using ESP32 as an SPI slave, running against a non-esp MCU master. As found by git bisect, before commit 58955a7 data was transmitted reliably, after that commit started getting errors on the master (see below for the types of errors). The anatomy of the error (1 or 2 bit shifts) seems to hint at timing problems.
Reverting the changes to spi_slave.c on top of master as of a few days ago (ebdcbe8c) makes the problem go away.
Expected Behavior
Data transmitted by the slave arrives at the master as sent.
Actual Behavior
Data is usually bit shifted by 1 bit (e.g. an xFE byte sent by the ESP32 slave becomes xFF at the master), sometimes (rarely) arrives correctly, sometimes it's bit shifted by 2 bits.
Sorry could you please double check the commit id — 58955a you've mentioned doesn't seem to be a commit in IDF repo?
Hi @igrr
I was missing one more letter to be uniquely identifiable :)
I changed the title/links to the right one, but here's a more direct one: https://github.com/espressif/esp-idf/commit/58955a79a27d3c7331eaec6e464878df42615a36
I'm talking specifically about the changes to spi_slave.c. I haven't tested or reverted the other changes.
@dralves Can you provide the MCU you are using, as well as the spi clock speed?
The problem is that the ESP32 slave has a delay (quite large!!!!!) on the MISO line after the SPI clock launch edge. When the GPIO matrix is used, it's 62.5ns, and if IOMUX is used, it's 37.5ns. This means it cannot meet the timing requirements when the SPI clock is above 8MHz (GPIO matrix) or 13MHz (IOMUX) (see the programming guide).
In the code before, we mistakenly shifted the timing of both the launch and latch edges half an SPI clock ahead. This solved some DMA issues, and made the timing performance better in some high-frequency cases.
But basically it's incorrect, and it was reported by others in #1346 and #2393, so we fixed it.
If you prefer the timing configurations, you can:
If you are using mode 1/3, set the mode of the slave to mode 0/2; it's half a clock ahead of mode 1/3. But the DMA should be disabled.
If you are using mode 0/2, turn on the DMA. The workaround in 58955a7 will help you shift the edges half a clock ahead (see the sketch after this list).
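For the second option, a minimal sketch based on the init code earlier in this thread (the only changes are mode 0 plus a non-zero DMA channel; please verify the exact timing behavior on your IDF version):
```c
spi_slave_interface_config_t slvcfg = {
    .spics_io_num = 5,
    .flags = 0,
    .queue_size = 1,
    .mode = 0           // mode 0/2 launches half a clock ahead of mode 1/3
};
// A non-zero dma_chan enables DMA, which applies the half-clock edge shift
// from 58955a7 mentioned above.
spi_slave_initialize(VSPI_HOST, &buscfg, &slvcfg, 1);
```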
Hi @ginkgm
The MCU/SPI master is a PIC32, mode 0, the master sets the speed at 10Mhz.
I read the docs you wrote :), so even though at first I was using the GPIO matrix I changed to IOMUX. Then I started reducing the speed I went all the way down to 5MHz and stlll the comms were unreliable. That's when I finally tried looking into reverting the changes. SPI had been super-reliable before.
Oh, btw, not sure if relevant this exact same code & mode & frequency on the master works flawlessly against an ESP8266.
Another note: I did see the comms becoming slightly better as I reduced speed. But at 5 MHz they were still very unreliable, and that's the lowest I could go in terms of speed.
Yet another note:
The table in #1346 states:
| Registers | mode0 | mode1 | mode2 | mode3 |
| --- | --- | --- | --- | --- |
| SPI_CK_IDLE_EDGE | 0 | 0 | 1 | 1 |
| SPI_CK_I_EDGE | 0 | 1 | 1 | 0 |
| SPI_MISO_DELAY_MODE | 0 | 0 | 0 | 0 |
| SPI_MISO_DELAY_NUM | 0 | 0 | 0 | 0 |
| SPI_MOSI_DELAY_MODE | 2 | 1 | 1 | 2 |
| SPI_MOSI_DELAY_NUM | 0 | 0 | 0 | 0 |
but the currently checked in code is:
if (mode == 0) {
//The timing needs to be fixed to meet the requirements of DMA
spihost[host]->hw->pin.ck_idle_edge = 1;
spihost[host]->hw->user.ck_i_edge = 0;
spihost[host]->hw->ctrl2.miso_delay_mode = 0;
spihost[host]->hw->ctrl2.miso_delay_num = 0;
spihost[host]->hw->ctrl2.mosi_delay_mode = 2;
spihost[host]->hw->ctrl2.mosi_delay_num = 2;
If I understand things correctly, this doesn't match the table. Not sure whether the table is accurate, but according to the table this should be:
if (mode == 0) {
//The timing needs to be fixed to meet the requirements of DMA
spihost[host]->hw->pin.ck_idle_edge = 0;
spihost[host]->hw->user.ck_i_edge = 0;
spihost[host]->hw->ctrl2.miso_delay_mode = 0;
spihost[host]->hw->ctrl2.miso_delay_num = 0;
spihost[host]->hw->ctrl2.mosi_delay_mode = 2;
spihost[host]->hw->ctrl2.mosi_delay_num = 0;
@dralves
The TRM is already updated: https://www.espressif.com/sites/default/files/documentation/esp32_technical_reference_manual_en.pdf
@ginkgm might it be hardware rev dependent? I have quite a few esp32 boards, but mainly use Adafruit's HUZZAH32 with the ESP-WROOM-32 (not sure which rev, but I think it's pretty old). I say this because the current implementation is supposed to work well at low frequencies (<7MHz even if I got the GPIO/IOMUX thing wrong), and that's not what I observed.
@ginkgm in any case if this is only a problem for me, I already have my own fork of esp-idf (where I include arduino as a component) so I can just maintain my fork with this additional change. Feel free to close this if you think it's not relevant/not a problem for other folks.
I just raised the issue in case others run into the same problem where their boards are working fine and suddenly stop working after an update.
Some suggestions on using PIC32:
I assume you are using the model in this spec.
Mode 0 correspond to CKP=0, CKE=1.
SP40=15ns, 1/(15ns+62.5ns)/2=6.5MHz. I think the slack is small, and maybe you are still going via the GPIO matrix? Maybe you can enable the debug message in spi_common.c to see whether you're using the IOMUX or not.
Finally, you can set SMP=1 to delay the master sample time, or use the trick I mentioned above to advance the slave launch time.
thanks @dralves
@ginkgm
@dralves thanks alot!
|
2025-04-01T06:38:36.785308
| 2021-08-10T10:35:40
|
964837397
|
{
"authors": [
"MaBecker",
"gfwilliams"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5715",
"repo": "espruino/BangleApps",
"url": "https://github.com/espruino/BangleApps/issues/782"
}
|
gharchive/issue
|
Bangle2 setting menu style for lefties
As a leftie, I scroll with my left hand through the settings menu and my finger hides the text.
Can we have a left hand menu style with text that is right aligned?
You mean with values on the left and the text on the right?
That's probably not something I'd build in but it should be trivial to add as an app that replaces the built-in E.showMenu. I'm happy to help with that
After using it for more settings, I would really like to have a larger font; because of the size of my finger I can't scroll between lines easily.
The menu is designed so that you just move your finger up and down, rather than having to tap on a specific menu item for exactly that reason (I found even 50% larger didn't really help). The sensitivity of that scrolling could easily be less (or even configurable) though?
Oh yes, changing the sensitivity for those large menus would really help.
Just to add: this may be fixed by https://github.com/espruino/BangleApps/issues/1040, I guess.
No response - let's assume that #1040 did fix it.
|
2025-04-01T06:38:36.804502
| 2018-10-26T13:57:39
|
374396188
|
{
"authors": [
"estevez-dev"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5716",
"repo": "estevez-dev/ha_client",
"url": "https://github.com/estevez-dev/ha_client/issues/157"
}
|
gharchive/issue
|
Card group switches
Add a switch to a card header if there are entities with the same domain in the card and the group switch is not disabled by configuration.
Done
|
2025-04-01T06:38:36.851035
| 2023-12-14T22:33:50
|
2042605348
|
{
"authors": [
"DennisRutherford",
"estruyf",
"yuxin1234"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5717",
"repo": "estruyf/doctor",
"url": "https://github.com/estruyf/doctor/issues/155"
}
|
gharchive/issue
|
"Process markdown files" hangs[BUG]
Describe the bug
A clear and concise description of what the bug is.
To Reproduce
Steps to reproduce the behavior:
doctor publish --outputFolder .
login to Microsoft Device Login using code provided
"Process markdown files" hangs for 30 min
There is no output in the output folder
Expected behavior
Process markdown files work.
There are output in output folder
Screenshots
Desktop (please complete the following information):
OS: Windows
Version: 10
Additional context
Add any other context about the problem here.
@estruyf Could you please look into the bug? Thanks.
can confirm i'm having the same issue.
Reproduced from a docker container - node:lts
@yuxin1234 @DennisRutherford could you try to run doctor publish --debug to see if it gives more information on why it hangs?
I just released version 1.12.0, which now supports Node.js version 18 and higher. If you could update to the latest version and test it again, that would be great.
Can confirm that latest version is now working to publish
Thanks @DennisRutherford for verifying
@estruyf Happy to help.
I've got another issue where this is happening again.
If I try and use certificate based authentication it gets stuck; but if I use device code it works fine. Any ideas?
@estruyf Thanks for fixing it. Still hanging for me after I upgraded to 1.12.0.
@yuxin1234 What do you get when you add the --debug flag?
Can you give more information about your environment? Node version, ...
@estruyf Below is the screenshot for running "doctor publish --debug":
Node: 18.0.0
Platform: Windows 10
Thanks.
Ah, I see you are not using SharePoint Online, but your own server. That might be the issue. As I have no access to an on-prem server, I won't be able to test out that use-case.
@estruyf Thanks.
|
2025-04-01T06:38:36.904325
| 2018-05-31T06:15:11
|
328007018
|
{
"authors": [
"bcumming",
"halfflat"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5718",
"repo": "eth-cscs/arbor",
"url": "https://github.com/eth-cscs/arbor/pull/496"
}
|
gharchive/pull-request
|
generalize time sequences
Changes to libarbor
Time sequences were added in src/time_sequence.hpp:
added new time_seq type that implements a type-erasure interface for the
concept of a time sequence generator.
added poisson, regular and vector-backed implementations of the time sequence
concept.
Event generators:
The poisson, regular and vector-backed implementations of the event generator
concept were refactored to use the.
Cell groups:
Removed the dss_cell_group and rss_cell_group and associated types.
Added a generic spike source cell that generates a sequence of spikes
at time points specified by a time_seq. Using this approach, an
additional cell_group specialization is not required for each type of
sequence, and user-defined sequences can be used with minimal overhead.
Unit tests
Added unit tests for time_seq.
Simplified event_generator unit tests, because much of the testing
of the sequences was moved to the time_seq tests.
Added unit tests for spike_source_cell_group.
Changes to miniapp
simplified the miniapp by removing the command line options for using an input spike chain from file.
updated the miniapp recipe to use spike_source cell group instead of dss_cell_group.
So, I'd still like to rename (and possibly promote outside the class) time_seq::dummy_seq, and I'm still arguing about vector_time_seq. The other issue, about splitting spike_source_cell out from spike_source_cell_group.hpp, I can do later.
tests/unit/test_rss_cell_group.cpp and tests/unit/test_dss_cell_group.cpp should be removed, too.
|
2025-04-01T06:38:36.911361
| 2020-07-22T09:01:05
|
663596647
|
{
"authors": [
"sjpb",
"vkarak"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5719",
"repo": "eth-cscs/reframe",
"url": "https://github.com/eth-cscs/reframe/issues/1430"
}
|
gharchive/issue
|
Performance variables must be numbers
I tried putting the output of "git commit" into a perf_var so that I could link changes in performance history to changes in the system configuration, but I get this error:
FAILURE INFO for ...
<snip>
* Failing phase: performance
<snip>
Reason: sanity error: the value extracted for performance variable 'alaska:roce-openmpi4-ucx:git_ref' is not a number: a2cccdd-dirty
There's nothing in the docs I can see that state performance variables need to be numbers.
Is there a way around this, or a better way of achieving the same thing?
Hi @sjpb, this behaviour is expected and it has been reported in #1146 (where ReFrame was just crashing instead of giving a message). The comment here describes what happens and why we expect a number: https://github.com/eth-cscs/reframe/issues/1146#issuecomment-580682324
What do you want to achieve exactly? What do your references look like for this "performance variable"?
Ok - so then this is really just a documentation issue. I'd assumed that with a reference which looked like this 'git_ref': (None, None, None, 'n/a'),, i.e. no reference, a non-numeric perf var would be ok.
What I was trying to achieve (and this obviously isn't the right way) was to get a git commit into the performance log, linked to a test run, so I can link changes in performance to changes in configuration. Still interested in a way of doing that!
I'd assumed that with a reference which looked like this 'git_ref': (None, None, None, 'n/a'),, i.e. no reference, a non-numeric perf var would be ok.
This is something we could implement easily, since it seems that people want to use the performance variables to log non-performance information. I will open a feature request for that.
What I was trying to achieve (and this obviously isn't the right way) was to get a git commit into the performance log, linked to a test run, so I can link changes in performance to changes in configuration. Still interested in a way of doing that!
Makes sense. One way you could possibly do that currently, is to try to pass the git hash as a "unit", the last element of the tuple, and make sure that what you extract as a value for this variable from the output is a number (anything). Then the git hash will be logged as the unit of that variable. I know it's hacky, but it should work.
FYI my eventual solution for this was to push the info into the tag instead. It isn't really a performance variable, and actually logically it makes more sense to have this available on each line of the performance log (like the reframe version) rather than generating a new line (="observation") for it. So maybe the current perf var functionality shouldn't be changed, although using tags is a bit hacky too. I considered using the info field but that seemed less appropriate.
I agree. This one needs more thinking. How did you make it log the tags? Did you add another log format specifier for tags?
I just added %(check_tags)s to the log format - then (outside of reframe) I have code which parses the perflogs.
I see. There is also #1068 that requests other check fields to be logged as well, and I'm thinking of making this more generic, so that you could easily select any test field to log, even custom ones. How does that sound?
Yes that would work. Tags actually work fine to be honest though - as reframe nicely formats them as a comma-separated string in the log they're pretty easy to handle. I guess conceptually "tags" and "things I want to log" are different though.
I'm closing this, too. It'll be addressed by #1068.
|
2025-04-01T06:38:36.914045
| 2023-11-12T23:45:53
|
1989638792
|
{
"authors": [
"ethanaobrien",
"marcsadler"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5720",
"repo": "ethanaobrien/docz",
"url": "https://github.com/ethanaobrien/docz/pull/8"
}
|
gharchive/pull-request
|
Miscellaneous markdown formatting edits
Edit markdown formatting and page layout for consistency, clean up trailing spaces and unnecessary blank lines in code to address #87
Fix non-sequential numbering
Move install directions for Chrultrabook Controller to post-install.md
Remove old information regarding running Windows on RW_LEGACY if using Ryzen
Merging should fail as you've changed the source directory of the docs, let me know if it doesn't work and I can fork your repo and open a PR that way.
https://github.com/ethanaobrien/docz/commit/a1f8c646b1f688865e6f45e53fba0e54b82c8bdc
|
2025-04-01T06:38:36.916145
| 2022-02-01T22:17:12
|
1121257428
|
{
"authors": [
"allancoding",
"ethanaobrien"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5721",
"repo": "ethanaobrien/emuserver",
"url": "https://github.com/ethanaobrien/emuserver/issues/7"
}
|
gharchive/issue
|
Testing the server
I get this error in the console:
Uncaught (in promise) TypeError: Cannot read properties of undefined (reading 'getUrls')
at (index):26:37
/favicon.ico:1 Failed to load resource: the server responded with a status of 404 ()
(index):19 Uncaught TypeError: Cannot read properties of undefined (reading 'start')
at HTMLButtonElement.<anonymous> ((index):19:55)
https://emulatorjs.allancoding.ga/
And I can not get the server to start.
This is strange. Do you think you might be able to look into it a bit? I've been trying to work on the next version of emulatorjs.
I have a question is the server supposed to start automatically?
No, it is not
This problem was fixed in the newest update.
|
2025-04-01T06:38:36.923660
| 2020-12-16T09:20:42
|
768622131
|
{
"authors": [
"JohnMcLear",
"jinbullsushil"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5722",
"repo": "ether/ep_font_color",
"url": "https://github.com/ether/ep_font_color/issues/25"
}
|
gharchive/issue
|
Color copy past not working
Hi All,
When I try to copy and paste the color and font size, the formatting is not preserved.
I have used ep_font_color for the color option.
Desktop:
OS: Window 10
Browser : chrome
Version : 87.0.4280.88
Tested here and it works fine:
https://video.etherpad.com/p/aUeq2TntWMJeyh7e_uIQ
Please test latest code (Etherpad and plugin) before creating issues :)
Hi JohnMcLear,
I don't know why you have closed this issue.
I have tested the issue https://video.etherpad.com/p/aUeq2TntWMJeyh7e_uIQ . The same issue is also showing on the provided URL.
I have also attached the video file for your reference.
Can you replicate the bug in firefox? Was this working as expected in Chrome before?
Looks like an upstream bug ;\
Hi JohnMcLear,
It seems to working in Firefox. But it is not working in Chrome. That would be helpful if you suggest anyway to fix it.
Thanks
Prolly a content collector bug related to contenteditable. Check shared.js which handles collection of pasted content.
Also try git bisect on the plugin and on develop branch to see if it's a new bug or recently introduced.
|
2025-04-01T06:38:36.941178
| 2018-05-28T22:50:22
|
327128942
|
{
"authors": [
"postables",
"shanev"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5723",
"repo": "ethereum-lightning/eth-lnd",
"url": "https://github.com/ethereum-lightning/eth-lnd/issues/2"
}
|
gharchive/issue
|
Go bindings and tests
[ ] auto-generate bindings whenever Solidity file changes using go generate
[ ] tests using blockchain simulator
One thing to keep in mind about the blockchain simulator, I have never ever been able to get the adjust time function to work from go-ethereum's simulated backend:
https://godoc.org/github.com/ethereum/go-ethereum/accounts/abi/bind/backends#SimulatedBackend.AdjustTime
@postables Ah that sucks. We may not have to use it if we go with block height vs. time for the HTLC.
|
2025-04-01T06:38:36.950905
| 2023-11-06T03:57:11
|
1978236272
|
{
"authors": [
"Pandapip1",
"minkyn"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5724",
"repo": "ethereum/ERCs",
"url": "https://github.com/ethereum/ERCs/pull/91"
}
|
gharchive/pull-request
|
Add ERC: Cross-Contract Hierarchical NFT
Reopening this ongoing draft from the old PR in the EIPs repo, with all comments addressed.
@xinbenlv Please take another look. Previous comments include adding a set method and supplementing security considerations.
@SamWilsn Please help with the merge due to this issue
How do I request to get approval from @eip-review-bot please?
@SamWilsn Sam, can you please help take a look why the bot didn't automatically approve it when all other checks have been satisfied?
@eth-bot rerun
|
2025-04-01T06:38:36.952289
| 2016-11-22T10:33:09
|
190962247
|
{
"authors": [
"LianaHus",
"axic",
"chriseth"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5725",
"repo": "ethereum/browser-solidity",
"url": "https://github.com/ethereum/browser-solidity/issues/337"
}
|
gharchive/issue
|
Static analysis: warn about too long contracts
After "spurious dragon", the deploy size is limited to 24576 bytes, this should be checked.
This is not really the static analyzer as it doesn't operate on the AST, rather a warning from remix.
fixed
We actually added this code to the compiler too :)
|
2025-04-01T06:38:36.955485
| 2019-06-30T10:08:11
|
462389030
|
{
"authors": [
"JustinDrake",
"djrtwo"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5726",
"repo": "ethereum/eth2.0-specs",
"url": "https://github.com/ethereum/eth2.0-specs/pull/1237"
}
|
gharchive/pull-request
|
Helpers cleanup
Addresses #918. It includes:
Grouping helpers by category
Renamings (for consistency, length, clarity)
Add top-level function comments where missing, and make them consistent
Remove neglected and sometimes misleading terminology section (to be readded post-freeze)
Various other cosmetic cleanups
I tried making BLS_WITHDRAWAL_PREFIX a Bytes1 but broke something. (I think the spec builder is confused somehow.)
I think we should leave as is instead of the "default" value here. It is a configurable constant that might be changed in the future (or in different deployments)
I'd prefer to revert to my last commit and get this merged. We have other things to handle that are waiting on this PR
I think we should leave as is instead of the "default" value here.
That's fine :) (Tried moving it to see if spec builder would be happier.) Making it Bytes1() is still probably the way forward.
|
2025-04-01T06:38:37.062151
| 2021-02-20T11:42:51
|
812588330
|
{
"authors": [
"Eknir"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5727",
"repo": "ethersphere/bee",
"url": "https://github.com/ethersphere/bee/issues/1303"
}
|
gharchive/issue
|
Data stewardship
User story
Story
As a local pinner, I want to validate if my locally pinned content is available in the network and reupload the content if that is not the case, such that I can guarantee availability of the content via Swarm
Acceptance criteria
A local pinner can run a process which periodically validates if certain content is available in the network and reuploads the content if it is not available
Background
With this feature, it becomes easier to guarantee availability of certain content in the network
To validate whether the content is available in the network, we should watch out that the node we are retrieving from didn't cache it
This feature may be implemented fully 2nd-layer
Bonus points for considering integration with web applications or swarm-cli
Tasks
Task | Assignee | Done (1), or 0
---- | -------- | --------------
     | @agazso  |
Duplicate of #1508
|
2025-04-01T06:38:37.063569
| 2022-01-25T09:00:34
|
1113607242
|
{
"authors": [
"AuHau"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5728",
"repo": "ethersphere/beeload-action",
"url": "https://github.com/ethersphere/beeload-action/issues/14"
}
|
gharchive/issue
|
PR previews are not updated with more commits pushed to the PR
It looks like the PR previews are not updated upon new commits landing to the PR.
For example see here: https://github.com/ethersphere/bee-js-docs/pull/97
This is most probably not a problem with this action, but with other automation that does not trigger it.
|
2025-04-01T06:38:37.067648
| 2023-07-06T11:06:53
|
1791331956
|
{
"authors": [
"ch4r10t33r",
"lbw33"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5729",
"repo": "etherspot/etherspot-prime-contracts",
"url": "https://github.com/etherspot/etherspot-prime-contracts/pull/18"
}
|
gharchive/pull-request
|
New EtherspotPaymaster deployments and updated README and deploy scripts
Description
Deployed new EtherspotPaymaster implementation to supported chains.
Added new 'required' tag for deploying just wallet factory and paymaster.
Updated README with new info on deploying contracts and cleaned up code.
Added initialBaseFeePerGas in Chiado network configs as required for deployment.
Motivation and Context
There is a new implementation of the EtherspotPaymaster after a couple of minor bug fixes.
We require a method for deploying solely the EtherspotWalletFactory & EtherspotPaymaster contracts.
How Has This Been Tested?
Screenshots (if appropriate):
Types of changes
[x] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to change)
@lbw33 Pls can you resolve the conflicts?
|
2025-04-01T06:38:37.072501
| 2023-05-03T13:25:33
|
1694075924
|
{
"authors": [
"adamsachs",
"rsilvery"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5730",
"repo": "ethyca/fides",
"url": "https://github.com/ethyca/fides/issues/3205"
}
|
gharchive/issue
|
Ensure data use and processing activity fields are locked down immediately after save
Is your feature request related to a specific problem?
A general requirement for the data use declaration form is that the Data use and Processing Activity fields are effectively locked-down in the UI (made read-only) once a data use declaration has been created on a given system. While this behavior is generally the case currently, there's an edge case where the fields are still editable immediately after saving the data use declaration initially, before navigating away from the form.
Although it may be unlikely, if users edit those fields in that state when they are still editable but after saving, they could end up breaking links between any custom fields and the data use declaration, which is a side effect of how our API is implemented (and the reason we want to lock those fields down generally).
Describe the solution you'd like
The fields should be locked down immediately after saving the data use declaration, and stay locked down indefinitely.
Describe alternatives you've considered, if any
In general we'll look to rework the data use/privacy declaration API to not require this constraint, but that's a longer-term effort.
Additional context
Found in doing some 2.12.0 release testing
cc @TheAndrewJackson @rsilvery @mfbrown @Kelsey-Ethyca
Is this resolved @adamsachs ?
Is this resolved @adamsachs ?
nope, still there!
@adamsachs still an issue?
@adamsachs still an issue?
yup, tested locally and this behavior still seems to be there.
|
2025-04-01T06:38:37.095404
| 2022-09-02T13:58:05
|
1360235914
|
{
"authors": [
"chriscalhoun1974",
"pattisdr",
"seanpreston"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5731",
"repo": "ethyca/fidesops",
"url": "https://github.com/ethyca/fidesops/pull/1247"
}
|
gharchive/pull-request
|
Updated Configuration settings: Connector parameters and Dataset configuration
Purpose
Allows the user to create DB, Manual, and SaaS connections. The user enters connector parameters and dataset configuration for a given connection.
This PR includes the following JIRA tickets:
922 - Add a Connector - DB connector configs
923 - Add a Connector - upload a DB Dataset YAML
1090 - Add a Connector - SaaS Dataset Management (YAML method)
1015 - Frontend - Configure a Manual entry Connector
Changes
Checklist
[x] Update CHANGELOG.md file
[x] Merge in main so the most recent CHANGELOG.md file is being appended to
[x] Add description within the Unreleased section in an appropriate category. Add a new category from the list at the top of the file if the needed one isn't already there.
[x] Add a link to this PR at the end of the description with the PR number as the text. example: #1
[ ] Applicable documentation updated (guides, quickstart, postman collections, tutorial, fidesdemo, database diagram.
If docs updated (select one):
[ ] documentation complete, or draft/outline provided (tag docs-team to complete/review on this branch)
[ ] documentation issue created (tag docs-team to complete issue separately)
[ ] Good unit test/integration test coverage
[ ] This PR contains a DB migration. If checked, the reviewer should confirm with the author that the down_revision correctly references the previous migration before merging
[ ] The Run Unsafe PR Checks label has been applied, and checks have passed, if this PR touches any external services
Ticket
Fixes #922 #923 #1090 #1015
@seanpreston If you would like to test the Create New Connection feature, you can edit the flags.json file by setting the isActive attribute to true.
Don't forget to execute npm i in the clients/ops/admin-ui terminal.
Hey @chriscalhoun1974 — I've given this a first pass and found the following issues:
There’s no way to edit a dataset / connectionconfig once one is created
Save YAML system throws an error without making any network request
“Cancel” button throws an error
Navigating from “dataset configuration” to “connector parameters” and back to “dataset configuration” removes any yaml
The yaml input is buggy — the linter highlights errors where none are
@seanpreston I have updated the Dataset YAML editor to reference the @monaco-editor/react and js/yaml NPM packages. All of the issues have been resolved now. Let me know if you have any questions. Thank you.
Thanks @chriscalhoun1974 — the Monaco editor is much nicer! This is nearly there, just a couple more things to fix:
The API to create a dataset up must always send a list of datasets
There's an error thrown when hitting the "Cancel" button on that same page
These issues are only for DB connections, SaaS connections worked well.
@seanpreston @pattisdr If the user clicks either the Cancel or Save button, the user will be redirected to the Database Connections landing page. In addition, when a connection is initially created the user will be auto redirected to either the Dataset configuration or DSR customization screen accordingly. This enhancement will provide a better overall user experience.
Thanks @chriscalhoun1974 — the issues I highlighted earlier are fixed up. There's just one issue here that's a showstopper which is:
the Create Connection doesn't show if no connectors are present in the DB
cc @adamczepeda too because I've noticed we're generally omitting empty states from the designs
These others are smaller things that shouldn't block us merging (which I'll create follow-up tickets for):
Errors returned by the API for incorrectly formatted yaml are no help to the user
"Amazon Redshift" isn't searchable by the string "ama" because the connector is indexed only as "redshift". We should be consistent with naming here, for instance BigQuery isn't also referred to as Google BigQuery. Let's pick a convention and get that working well across all connectors
@seanpreston @adamczepeda The Create Connection doesn't show if no connectors are present in the DB issue has been resolved now.
I've tested this again and found another four clean-up tasks, but nothing that'll stop us merging this increment as it's getting very large now.
https://github.com/ethyca/fidesops/issues/1333
https://github.com/ethyca/fidesops/issues/1334
https://github.com/ethyca/fidesops/issues/1335
https://github.com/ethyca/fidesops/issues/1336
One followup, to make it more visible if we have a partially created webhook, we might call the secrets endpoint (with an empty dictionary) or the test endpoint when filling out the first screen of the manual webhook, which will put it in a "failed" state. Then when the fields are added, we run the secrets/test endpoint again so it should pass (this resource is only checked to see if the webhook and fields exist).
This will then flag in the UI if a webhook is only partially filled out.
|
2025-04-01T06:38:37.111967
| 2019-11-03T21:23:04
|
516890969
|
{
"authors": [
"codecov-io",
"etingof",
"russhousley"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5732",
"repo": "etingof/pyasn1-modules",
"url": "https://github.com/etingof/pyasn1-modules/pull/98"
}
|
gharchive/pull-request
|
Add support for RFC 5916
Add module and test for RFC 5916
Codecov Report
Merging #98 into master will increase coverage by <.01%.
The diff coverage is 100%.
@@ Coverage Diff @@
## master #98 +/- ##
==========================================
+ Coverage 99.35% 99.35% +<.01%
==========================================
Files 88 89 +1
Lines 5758 5766 +8
==========================================
+ Hits 5721 5729 +8
Misses 37 37
Impacted Files              Coverage Δ
pyasn1_modules/rfc5916.py   100% <100%> (ø)
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 2e6acd1...45b5fe2. Read the comment docs.
Thank you!
|
2025-04-01T06:38:37.115301
| 2019-12-18T09:38:27
|
539564211
|
{
"authors": [
"aryes",
"lextm"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5733",
"repo": "etingof/pysnmp",
"url": "https://github.com/etingof/pysnmp/issues/333"
}
|
gharchive/issue
|
Receiving V3 Traps - Engine ID
We used to work with PySNMP version 4.3.1 to receive SNMP V3 traps, and it worked perfectly. Recently, we upgraded to version 4.4.12, and the traps were not received anymore.
I debugged the issue and found that the call to __getUserInfo at service.py line 759 throws a NoSuchInstanceError exception:
# 3.2.4
try:
(usmUserName,
usmUserSecurityName,
usmUserAuthProtocol,
usmUserAuthKeyLocalized,
usmUserPrivProtocol,
usmUserPrivKeyLocalized) = self.__getUserInfo(
snmpEngine.msgAndPduDsp.mibInstrumController,
msgAuthoritativeEngineId, msgUserName
)
I think it happens because the engine ID in the trap is not the same as the engine ID of the user I created for receiving the traps.
As far as I understand from the specs, we need to use the same engine ID for the receiving user and the trap sender.
If this is the case, why did it work in PySNMP version 4.3.1? Was it a bug in the library? Is engine ID matching not really mandatory?
The engine ID matching is mandatory as documented in the standard, so 4.3.1 indeed has a bug there and 4.4.12 contains the fix.
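For anyone hitting this after upgrading: when registering the USM user on the receiver side, the sender's authoritative engine ID now has to be supplied explicitly. A minimal sketch (the engine ID value here is a placeholder; use the one your trap sender reports):
from pysnmp.entity import engine, config
from pysnmp.proto.api import v2c

snmpEngine = engine.SnmpEngine()

# Register the v3 user against the sender's authoritative engine ID,
# since engine ID matching is mandatory for TRAPs (unlike INFORMs).
config.addV3User(
    snmpEngine, 'usr-md5-des',
    config.usmHMACMD5AuthProtocol, 'authkey1',
    config.usmDESPrivProtocol, 'privkey1',
    securityEngineId=v2c.OctetString(hexValue='8000000001020304')
)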
|
2025-04-01T06:38:37.125851
| 2023-10-21T00:58:01
|
1955177747
|
{
"authors": [
"bythehist",
"etra0"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5734",
"repo": "etra0/litcher",
"url": "https://github.com/etra0/litcher/issues/16"
}
|
gharchive/issue
|
Access Denied When running litcher
I had it working and it started giving me this all of a sudden, not sure what changed here.
running it as admin
Did you update Windows? I believe this could be related to #12 sadly, and I haven't fixed it yet. It requires me to update a dependency, a big TODO on my side.
I have been on Windows 11 this whole year, there was just an update the other week, so maybe that broke it? No worries, no rush.
If you have the time, could you test this version? https://github.com/etra0/litcher/releases/tag/v0.4.0-alpha
hudhook has been updated heavily since then.
|
2025-04-01T06:38:37.135004
| 2020-10-30T00:50:20
|
732799149
|
{
"authors": [
"Kangaroux",
"bluebandit21",
"poco0317"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5735",
"repo": "etternagame/etterna",
"url": "https://github.com/etternagame/etterna/issues/916"
}
|
gharchive/issue
|
Update Building.md to mention how to build a release configuration
Building.md doesn't currently mention explicitly how to build in a release configuration.
This is largely fine, since very few people would ever want to build for macOS or Windows in non-debug, given that pre-built releases are available.
However, for Linux users, where the only way to play the game is through compiling it yourself, having the default build type be debug is a footgun for poor game performance.
It would be nice to have an explicit note in Building.md as to how you can build the game in release, and that you should do so if compiling on Linux to actually play the game, not develop.
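For reference, a typical out-of-source CMake release build looks something like this (a sketch; this project's exact options may differ):
cmake -DCMAKE_BUILD_TYPE=Release ..
cmake --build . -- -j$(nproc)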
An alternate (possibly poor) idea would be to have the default build be release in Linux only, still defaulting to debug for non-linux builds. Users would then have to explicitly opt into a debug build in Linux, which might be a more reasonable behavior.
I'd make the PR to update it myself right now, but I'm both tired and busy so this issue is a note to myself to add it.
675508bb542268704e3a39d80fac9adeeee41808
@poco0317 That commit never made its way into master
An update has not released, thus nothing has been merged to master.
|
2025-04-01T06:38:37.184707
| 2018-12-08T21:31:46
|
388961351
|
{
"authors": [
"jabrena"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5736",
"repo": "ev3dev-lang-java/ev3dev-lang-java",
"url": "https://github.com/ev3dev-lang-java/ev3dev-lang-java/issues/612"
}
|
gharchive/issue
|
Add a new command to check sdcard info
https://unix.stackexchange.com/questions/273971/how-to-get-hard-disk-information-on-linux-terminal
Rejected by LEAN
|
2025-04-01T06:38:37.259201
| 2020-01-29T18:34:14
|
557041496
|
{
"authors": [
"kaimantsch",
"trixr"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5739",
"repo": "evbacher/gd2md-html",
"url": "https://github.com/evbacher/gd2md-html/issues/57"
}
|
gharchive/issue
|
Excessive newlines appear before lists in converted document
While markdown ignores more than 2 newlines, the converter currently generates 4 before lists. This creates an awkward looking document and requires a lot of search/replace to repair.
Note: this issue occurs in every case I tested, including after paragraphs and a range of header types (h1, h2, h3).
Example:
## Outcomes
1. Improv
There is a similar issue with extra newlines before and after horizontal rules. There should only be one empty line before and one after.
Some text.
---
Following text.
|
2025-04-01T06:38:37.282949
| 2023-03-26T10:26:43
|
1640853632
|
{
"authors": [
"andig",
"naltatis",
"pauxus"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5740",
"repo": "evcc-io/evcc",
"url": "https://github.com/evcc-io/evcc/issues/7062"
}
|
gharchive/issue
|
Safeguard for wrong climating notification
Is your feature request related to a problem? Please describe.
Currently Renault seems to report the climater as always active. This led to draining the house battery for quite a while until I noticed the actual problem. Even switching to "off" did not stop the charging; I had to unplug manually.
Describe the solution you'd like
There could be a couple of solutions.
it would be really useful if the climate state were shown in the UI; this would have allowed me to find the problem a lot earlier.
#6588 would at least pose a quick workaround in the specific case
"off" should also turn of climater based charging
Some sort of sanity check: i.e. climater charge should only run for an hour or so.
Additional context
I noticed that "log level trace" does not show any requests for Renault (anymore). Has this changed?
Should this now be at least resolved as a workaround via poll mode?
it would be really useful if the climate state were shown in the UI; this would have allowed me to find the problem a lot earlier.
/cc @naltatis do we have this included in the notifications now?
@andig the climate status was removed from the UI a while ago (with the new design). The information is still available in the API, though. I would suggest we add it back as a status text.
|
2025-04-01T06:38:37.294768
| 2016-04-05T17:54:11
|
146061708
|
{
"authors": [
"evcohen",
"lencioni"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5741",
"repo": "evcohen/eslint-plugin-jsx-a11y",
"url": "https://github.com/evcohen/eslint-plugin-jsx-a11y/issues/6"
}
|
gharchive/issue
|
img-uses-alt does not allow empty string or possible empty string
This rule considers the following JSX bad:
function Foo() {
return <img alt={foo || ''} />;
}
this is also considered bad:
function Foo() {
return <img alt="" />;
}
but this is okay:
function Foo() {
return <img alt={foo} />;
}
However, I think that all three should be okay. Empty strings can be used appropriately for alt text on images that are decorative.
I agree on the first one, since we can't determine the value of foo until runtime. However, I'm not sure about the second - you can place a space between the quotation marks (<img alt=" " />) and it should pass and still semantically represent the same thing to a screen reader. Also, I can implement a case where alt="" (or any other form of undefined value) passes the lint rule if role=presentation is present.
I think the role=presentation bit makes sense. According to the spec, it should probably enforce alt="" if it has a presentation role.
Authors SHOULD NOT provide meaningful alternative text (for example, use alt="" in HTML4) when the presentation role is applied to an image.
https://www.w3.org/TR/wai-aria/roles#presentation
Fixing first example in 0.5.3 - will upgrade minor on role=presentation enhancement. 0.5.3 should be done within the hour.
Awesome! Thanks again!
0.5.3 published - should fix first use case + other bugs that are closed!
I think this is still broken for cases like
function Foo() {
return <img alt={foo.bar || ''} />;
}
and
function Foo() {
return <img alt={bar() || ''} />;
}
and
function Foo() {
return <img alt={foo.bar() || ''} />;
}
Added test cases for those and fixed in v0.5.4 - there may still be other edge cases; working on resolving cases to handle each type specified in the spec
Wonderful!
@evcohen do you have an ETA on the role="presentation" change? No rush--I'm just wondering if I should roll with alt=" " or wait it out.
@lencioni waiting for ci build to pass and then publishing v0.6.0. Error message updated and this strictly allows only the following scenario <img alt="" role="presentation" />
bad:
<img alt={``} role="presentation" /> etc. as we only want to deal with literals for this case.
I noticed that the Chrome audit rules allows alt="" without role="presentation" and role="presentation" without alt="", FYI: https://github.com/GoogleChrome/accessibility-developer-tools/wiki/Audit-Rules#ax_text_02
Use the attributes alt="", role="presentation" or include the image as a CSS background-image to identify it as being used purely for stylistic or decorative purposes and that it should be ignored by people using assistive technologies.
Source: http://fae20.cita.illinois.edu/rule/ARIA_STRICT/IMAGE_2/
Not sure what the real rule is in this case, but as a linter, I think it's better to be opinionated in a case like this. As in, the only time alt can be undefined is when role="presentation". In this sense, we can drop the check for alt altogether if role="presentation" is present. Thoughts?
My reading of the text you posted agrees with the Chrome text I linked to, and it also fits my intuitive understanding. I think it makes sense to enforce the existence of alt unless role="presentation", and if role="presentation", enforce either a non-existent or empty alt.
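To summarize the agreed behavior with a quick sketch (rule options and exact messages may differ by version):
// OK: meaningful image with alternative text
<img src="cat.jpg" alt="A cat sleeping on a couch" />

// OK: decorative image, explicitly marked presentational
<img src="divider.png" alt="" role="presentation" />

// Flagged: empty alt without role="presentation"
<img src="divider.png" alt="" />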
|
2025-04-01T06:38:37.323270
| 2017-06-15T09:22:10
|
236127974
|
{
"authors": [
"mudassirzulfiqar",
"rkhater",
"vishalvanpariya"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5743",
"repo": "eventtus/photo-editor-android",
"url": "https://github.com/eventtus/photo-editor-android/issues/2"
}
|
gharchive/issue
|
Some feature needed
Hey, first of all, thumbs up for your library. I have seen different edit libraries, but this one satisfies my needs, although some features are still required. I have one question: is it possible to get the position of each element on the screen before saving the picture,
e.g. the position of text and stickers? Because I want to save the entire project while the user is editing and reload it when they want to resume the old editing. Is it possible? If it is, then can you please do it for me?
@mudassirzulfiqar Thank you for your kind comment, As for this feature yes We are going to have along with some more features next few weeks
I also need these features.
@mudassirzulfiqar you got any solution for it?
Thank You
Vishal Vanpariya
|
2025-04-01T06:38:37.330715
| 2015-10-28T20:17:18
|
113913025
|
{
"authors": [
"jbehrends",
"jlambert121"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5744",
"repo": "evenup/evenup-puppet",
"url": "https://github.com/evenup/evenup-puppet/pull/33"
}
|
gharchive/pull-request
|
Disable the puppetserver report processor feature by default
By default if reports is NOT defined, puppetserver will enable the "store" reports processor which will generate yaml reports in: /opt/puppetlabs/server/data/puppetserver/reports/
See: https://docs.puppetlabs.com/puppet/latest/reference/reporting_about.html#configuring-reporting
If you're not expecting this or dealing with it in some way, you will probably run your puppetserver out of disk space. This PR defaults this to "none", which turns the feature off.
The module gives you the ability to set it to none if you'd like, but I believe this change changes the puppet default. I'm curious why the PR instead of just setting the report processor to none in your environment.
Sorry,
I was coming from the angle of a new user. By default, if nothing at all is defined, puppetserver will start spitting out report files in a non-obvious directory. I'm going to guess that a new user to Puppet would have no idea this is happening, and would eventually run their server out of disk space. So my initial thought was that this feature should be turned off by default.
I'm OK if you're really not in favor of this PR. Maybe as a good alternative we document that setting better in the readme. Maybe list a few possible built-in options: puppetdb, http, store (default), log?
I think I want to leave this as the default, but I'd love some documentation updates!
|
2025-04-01T06:38:37.385094
| 2015-01-13T13:28:40
|
54193421
|
{
"authors": [
"evert0n",
"mvila"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5745",
"repo": "evert0n/koa-cors",
"url": "https://github.com/evert0n/koa-cors/issues/21"
}
|
gharchive/issue
|
'settings' should be immutable
At the beginning of the middleware handler, there is:
var options = settings || defaults;
Later, at several places, the options object is modified. For example:
options.headers = this.header['access-control-request-headers'];
It means that the global middleware settings object (or defaults if no settings are specified) is modified from one request to another. I think it's dangerous and a source of bugs.
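A minimal sketch of one possible fix (the actual change landed in the PR referenced below and may differ): derive a fresh options object per request instead of mutating the shared reference.
// shallow-copy per request so the module-level settings/defaults stay untouched
var options = Object.assign({}, defaults, settings);
options.headers = this.header['access-control-request-headers'];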
Hey @mvila Thanks, this has been fixed with the PR #25
|
2025-04-01T06:38:37.403327
| 2018-07-01T18:45:42
|
337312531
|
{
"authors": [
"Koerner",
"evertramos",
"xtjoeywx"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5746",
"repo": "evertramos/docker-wordpress-letsencrypt",
"url": "https://github.com/evertramos/docker-wordpress-letsencrypt/issues/19"
}
|
gharchive/issue
|
413 Request Entity Too Large
I'm trying to upload a large plugin and it won't let me. I'm getting an nginx error page.
To recreate: Upload a plugin using wordpress's plugin uploader. The one I used is 4.3 MB. (.zip)
I'm thinking the solution would be to increase php limits, but they are already high enough in your default uploadsize.ini file.
Thanks for your work on this. It's awesome!
Please check your nginx webproxy to see if you set the option on upload limit there.
Restart the proxy and let me know if it works.
I've set things up as is. I mean, I didn't change, enable, or add anything during my set up of the webproxy.
Do I need to uncomment anything? Maybe this: USE_NGINX_CONF_FILES=true in the .env file, so that it would use the uploadsize.conf file? Also, do I need to add anything to it other than this: client_max_body_size 100m;
That's correct. Please uncomment this option, set the upload size as you need, and just restart the webproxy containers.
If you are in production environment, you might want to try (on the webproxy):
docker-compose restart
So it will not go off-line while restarting; if that does not work you will need to reload your webproxy with this command:
docker exec -it webproxy nginx -s reload
Let me know if it worked.
I did as you said. I uncommented that option and set the upload size. Then I restarted the webproxy containers as well as the wordpress containers. I used docker-compose restart within the folders. Then I tried uploading the plugin and I got the same "413 Request Entity Too Large" error.
I also tried reloading them with this:
docker exec -it nginx-web -s reload But I got this error in terminal: OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "exec: \"-s\": executable file not found in $PATH": unknown
I then stopped all of the containers and started them again to see if that would work. Still the same result.
This is my .env file:
#
# docker-compose-letsencrypt-nginx-proxy-companion
#
# A Web Proxy using docker with NGINX and Let's Encrypt
# Using the great community docker-gen, nginx-proxy and docker-letsencrypt-nginx-proxy-companion
#
# This is the .env file to set up your webproxy enviornment
#
# Your local containers NAME
#
NGINX_WEB=nginx-web
DOCKER_GEN=nginx-gen
LETS_ENCRYPT=nginx-letsencrypt
#
# Your external IP address
#
IP=<IP_ADDRESS>
#
# Default Network
#
NETWORK=webproxy
#
# Service Network (Optional)
#
# In case you decide to add a new network to your services containers you can set this
# network as a SERVICE_NETWORK
#
# [WARNING] This setting was built to use our `start.sh` script or in that special case
# you could use the docker-composer with our multiple network option, as of:
# `docker-compose -f docker-compose-multiple-networks.yml up -d`
#
#SERVICE_NETWORK=webservices
#
# NGINX file path
#
NGINX_FILES_PATH=/nginx/data
#
# NGINX use special conf files
#
# In case you want to add some special configuration to your NGINX Web Proxy you could
# add your files to ./conf.d/ folder as of sample file 'uploadsize.conf'
#
# [WARNING] This setting was built to use our `start.sh`.
#
# [WARNING] Once you set this options to true all your files will be copied to data
# folder (./data/conf.d). If you decide to remove this special configuration
# you must delete your files from data folder ./data/conf.d.
#
USE_NGINX_CONF_FILES=true
#
# Docker Logging Config
#
# This section offers two options max-size and max-file, which follow the docker documentation
# as follow:
#
# logging:
# driver: "json-file"
# options:
# max-size: "200k"
# max-file: "10"
#
#NGINX_WEB_LOG_MAX_SIZE=4m
#NGINX_WEB_LOG_MAX_FILE=10
#NGINX_GEN_LOG_MAX_SIZE=2m
#NGINX_GEN_LOG_MAX_FILE=10
#NGINX_LETSENCRYPT_LOG_MAX_SIZE=2m
#NGINX_LETSENCRYPT_LOG_MAX_FILE=10
This is my docker-compose.yml file:
version: '3'
services:
nginx-web:
image: nginx
labels:
com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
container_name: ${NGINX_WEB:-nginx-web}
restart: always
ports:
- "${IP:-<IP_ADDRESS>}:80:80"
- "${IP:-<IP_ADDRESS>}:443:443"
volumes:
- ${NGINX_FILES_PATH:-./data}/conf.d:/etc/nginx/conf.d
- ${NGINX_FILES_PATH:-./data}/vhost.d:/etc/nginx/vhost.d
- ${NGINX_FILES_PATH:-./data}/html:/usr/share/nginx/html
- ${NGINX_FILES_PATH:-./data}/certs:/etc/nginx/certs:ro
- ${NGINX_FILES_PATH:-./data}/htpasswd:/etc/nginx/htpasswd:ro
logging:
options:
max-size: ${NGINX_WEB_LOG_MAX_SIZE:-4m}
max-file: ${NGINX_WEB_LOG_MAX_FILE:-10}
nginx-gen:
image: jwilder/docker-gen
command: -notify-sighup ${NGINX_WEB:-nginx-web} -watch -wait 5s:30s /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
container_name: ${DOCKER_GEN:-nginx-gen}
restart: always
volumes:
- ${NGINX_FILES_PATH:-./data}/conf.d:/etc/nginx/conf.d
- ${NGINX_FILES_PATH:-./data}/vhost.d:/etc/nginx/vhost.d
- ${NGINX_FILES_PATH:-./data}/html:/usr/share/nginx/html
- ${NGINX_FILES_PATH:-./data}/certs:/etc/nginx/certs:ro
- ${NGINX_FILES_PATH:-./data}/htpasswd:/etc/nginx/htpasswd:ro
- /var/run/docker.sock:/tmp/docker.sock:ro
- ./nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro
logging:
options:
max-size: ${NGINX_GEN_LOG_MAX_SIZE:-2m}
max-file: ${NGINX_GEN_LOG_MAX_FILE:-10}
nginx-letsencrypt:
image: jrcs/letsencrypt-nginx-proxy-companion
container_name: ${LETS_ENCRYPT:-nginx-letsencrypt}
restart: always
volumes:
- ${NGINX_FILES_PATH:-./data}/conf.d:/etc/nginx/conf.d
- ${NGINX_FILES_PATH:-./data}/vhost.d:/etc/nginx/vhost.d
- ${NGINX_FILES_PATH:-./data}/html:/usr/share/nginx/html
- ${NGINX_FILES_PATH:-./data}/certs:/etc/nginx/certs:rw
- /var/run/docker.sock:/var/run/docker.sock:ro
environment:
NGINX_DOCKER_GEN_CONTAINER: ${DOCKER_GEN:-nginx-gen}
NGINX_PROXY_CONTAINER: ${NGINX_WEB:-nginx-web}
logging:
options:
max-size: ${NGINX_LETSENCRYPT_LOG_MAX_SIZE:-2m}
max-file: ${NGINX_LETSENCRYPT_LOG_MAX_FILE:-10}
networks:
default:
external:
name: ${NETWORK:-webproxy}
This is my uploadsize.conf file:
client_max_body_size 1000M;
Now onto my wordpress files. Here is my wordpress docker-compose.yml file:
version: '3'
services:
db:
container_name: ${CONTAINER_DB_NAME}
image: mariadb:latest
restart: unless-stopped
volumes:
- ${DB_PATH}:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
MYSQL_DATABASE: ${MYSQL_DATABASE}
MYSQL_USER: ${MYSQL_USER}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
wordpress:
depends_on:
- db
container_name: ${CONTAINER_WP_NAME}
image: wordpress:latest
restart: unless-stopped
volumes:
- ${WP_CORE}:/var/www/html
- ${WP_CONTENT}:/var/www/html/wp-content
- ./conf.d/uploadsize.ini:/usr/local/etc/php/conf.d/uploadsize.ini
environment:
WORDPRESS_DB_HOST: ${CONTAINER_DB_NAME}:3306
WORDPRESS_DB_NAME: ${MYSQL_DATABASE}
WORDPRESS_DB_USER: ${MYSQL_USER}
WORDPRESS_DB_PASSWORD: ${MYSQL_PASSWORD}
WORDPRESS_TABLE_PREFIX: ${WORDPRESS_TABLE_PREFIX}
VIRTUAL_HOST: ${DOMAINS}
LETSENCRYPT_HOST: ${DOMAINS}
LETSENCRYPT_EMAIL: ${LETSENCRYPT_EMAIL}
networks:
default:
external:
name: ${NETWORK}
Here is my wordpress .env file:
# .env file to set up your wordpress site
#
# Network name
#
# Your container app must use a network connected to your webproxy
# https://github.com/evertramos/docker-compose-letsencrypt-nginx-proxy-companion
#
NETWORK=webproxy
#
# Database Container configuration
# We recommend MySQL or MariaDB - please update docker-compose file if needed.
#
CONTAINER_DB_NAME=db
# Path to store your database
DB_PATH=/wordpress/database/data
# Root password for your database
MYSQL_ROOT_PASSWORD=mypassword
# Database name, user and password for your wordpress
MYSQL_DATABASE=mydatabasename
MYSQL_USER=myusername
MYSQL_PASSWORD=mypassword
#
# Wordpress Container configuration
#
CONTAINER_WP_NAME=wordpress
# Path to store your wordpress files
WP_CORE=/wordpress/core/data
WP_CONTENT=/wordpress/wp-content/data
# Table prefix
WORDPRESS_TABLE_PREFIX=wp_
# Your domain (or domains)
DOMAINS=mydomain.com,www.mydomain.com
# Your email for Let's Encrypt register
<EMAIL_ADDRESS>
Here is my wordpress uploadsize.ini file:
file_uploads = On
memory_limit = 3000M
upload_max_filesize = 1000M
post_max_size = 2000M
max_execution_time = 1000
I assume you have fixed that... if not open this issue again and comment.
Thanks!
Sorry, I haven't been able to get back to this until now. Yes, that did fix it. Thank you! :+1:
Hi, I forgot to add the conf.d folder. I added it, but now I get the following error:
ERROR: for wordpress Cannot start service wordpress: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:58: mounting \\\"/home/*/*/*/conf.d/uploadsize.ini\\\" to rootfs \\\"/var/lib/docker/overlay2/*/merged\\\" at \\\"/var/lib/docker/overlay2/*/merged/usr/local/etc/php/conf.d/uploadsize.ini\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
How can I fix this?
@Koerner can you show how your docker-compose is looking?
version: '3'
services:
db:
container_name: ${CONTAINER_DB_NAME}
image: mariadb:latest
restart: unless-stopped
volumes:
- ${DB_PATH}:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
MYSQL_DATABASE: ${MYSQL_DATABASE}
MYSQL_USER: ${MYSQL_USER}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
wordpress:
depends_on:
- db
container_name: ${CONTAINER_WP_NAME}
image: wordpress:latest
restart: unless-stopped
volumes:
- ${WP_CORE}:/var/www/html
- ${WP_CONTENT}:/var/www/html/wp-content
- ./conf.d/uploadsize.ini:/usr/local/etc/php/conf.d/uploadsize.ini
environment:
WORDPRESS_DB_HOST: ${CONTAINER_DB_NAME}:3306
WORDPRESS_DB_NAME: ${MYSQL_DATABASE}
WORDPRESS_DB_USER: ${MYSQL_USER}
WORDPRESS_DB_PASSWORD: ${MYSQL_PASSWORD}
WORDPRESS_TABLE_PREFIX: ${WORDPRESS_TABLE_PREFIX}
VIRTUAL_HOST: ${DOMAINS}
LETSENCRYPT_HOST: ${DOMAINS}
LETSENCRYPT_EMAIL: ${LETSENCRYPT_EMAIL}
networks:
default:
external:
name: ${NETWORK}
I have an update on this.
I found out why I was having an issue in the first place.
I was installing the nginx files under the root user here: /nginx/data
Then I installed wordpress under the sudo user here: /home/myuser/wordpress_site
Therefore, I believe it was a permission problem. Wordpress wasn't able to access the webproxy's config files to read the uploadsize.conf file because it didn't have root permission.
The fix: I installed both the webproxy and wordpress under the sudo user and I had no more problems.
I hope this helps someone else.
Or you could have a www-data owner and group for the wp files.
|
2025-04-01T06:38:37.406257
| 2021-02-25T14:20:27
|
816481001
|
{
"authors": [
"martinbianchi",
"osdiab"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5747",
"repo": "everydotorg/donate-button",
"url": "https://github.com/everydotorg/donate-button/pull/116"
}
|
gharchive/pull-request
|
feat: use bigger images in example
@osdiab I saw issue #101. The reason the images don't look very good is that they don't have a good aspect ratio. I changed the images and now they are shown as they should be. We can surely improve these designs, but at least the images are now shown as expected.
border radius definitely helps!
|
2025-04-01T06:38:37.422984
| 2021-08-30T23:08:17
|
983280990
|
{
"authors": [
"hughess"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5748",
"repo": "evidence-dev/evidence",
"url": "https://github.com/evidence-dev/evidence/issues/110"
}
|
gharchive/issue
|
Support for plotting multiple columns in LineChart
LineChart component currently requires tidy data to plot multiple lines. Add support for plotting multiple columns (i.e. 1 column per measure and plotting each).
Syntax could be something like <LineChart x=date y=(measure1, measure2)/>
Needs to be able to apply color palette in same way as series argument.
Should probably use an array as the multi-column argument:
[column_a, column_b]
Can also expose arguments for series name:
["Column A", "Column B"]
And lineColor:
[#4287f5, black]
In addition to any other line formatting arguments: if only one color or style is provided but there are multiple series, apply that style to all series. If no styles are provided, use the multi-series color palette as normal.
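Putting those pieces together, usage might look like this (hypothetical syntax; data, column, and series names are placeholders):
<LineChart
    data={sales}
    x=date
    y=[measure1, measure2]
    series=["Measure 1", "Measure 2"]
    lineColor=[#4287f5, black]
/>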
|
2025-04-01T06:38:37.434239
| 2017-08-09T19:59:46
|
249144198
|
{
"authors": [
"cmungall",
"mchibucos",
"rctauber"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5749",
"repo": "evidenceontology/evidenceontology",
"url": "https://github.com/evidenceontology/evidenceontology/issues/149"
}
|
gharchive/issue
|
Subset URLs in ECO have changed
In the 2017-07-19 release, the structure of the subset URLs changed
e.g.
http://purl.obolibrary.org/obo/eco#go_groupings
->
http://purl.obolibrary.org/obo/eco/go_groupings
I can see the motivation for changing: the old URLs were ugly. However, unfortunately we were depending on these subsets for our amigo load (see #93). As the urls changed without warning, the classes effectively slipped out of the subset for us, causing this problem: https://github.com/geneontology/amigo/issues/433
In future can you give us advance warning so we can change our configurations? Thanks.
It may be easiest for you to stick with what you have for now (I recommend having the PURLs resolve), but I defer to @kltm here.
Note that the change also had consequences for obo-format users - you can see for yourself in the obo version of the file.
The URI to shortform mappings for subsets is a little opaque, see http://owlcollab.github.io/oboformat/doc/obo-syntax.html
section 5.9.2
Basically 'non-canonical' identifiers like 'goslim_foo' get given a hash IRI using the ontology base IRI. There were reasons for this to do with unambiguous roundtripping.
Although more people are abandoning obo, unfortunately many of the consumers of eco still use it. I advise doing a diff between obo version with each release. If anything looks odd or changes unexpectedly then hold off and consult us.
Thanks Chris. Very helpful. Will do.
OK, I consulted with @kltm, this is actually a bit more problematic for us than I thought.
@mchibucos - Can we request an ASAP new release of ECO with the URLs for subsets reverted back (I can make the PR if you like). This will give us some time to make necessary software changes. We can then switch to your preferred URLs in 1-2 months, and perhaps coordinate this with wider implemented best practices across OBO.
Hi @cmungall - I apologize for any issues this caused; this was actually something that I missed in a merge that changed the namespace from eco# to eco/. I'll fix it and release now.
Affirmative on all. @rctauber
Not released yet....
Sorry Chris. I was under the impression this was fixed. @rctauber any insights?
Release is live, thanks for your patience
@cmungall is everything OK with this now? Let me know if there's anything else that needs to be fixed. Thanks!
Thanks!
|
2025-04-01T06:38:37.445223
| 2017-12-28T14:48:39
|
284929227
|
{
"authors": [
"brunocodutra",
"craigpryde"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5750",
"repo": "evilebottnawi/favicons",
"url": "https://github.com/evilebottnawi/favicons/pull/193"
}
|
gharchive/pull-request
|
file.isSymbolic update vinyl package to v2.1.0
#192
Hi @evilebottnawi,
I have adjusted the package.json file to reference the new v2.1.0 of vinyl which fixes the issue of error file.isSymbolic within gulp after the release of alpha 3.
I ran the test which completes successfully. Although I discovered that the tests fail on the data returned from the request function of the helpers-es5.js file with the following message:
}) : callback(result.result.error_message);
^
TypeError: Cannot read property 'result' of undefined
at /Users/craigpryde/Documents/Websites/favicons/helpers-es5.js:476:46
at Object.handleResponse (/Users/craigpryde/Documents/Websites/favicons/node_modules/node-rest-client/lib/node-rest-client.js:448:5)
at Object.handleEnd (/Users/craigpryde/Documents/Websites/favicons/node_modules/node-rest-client/lib/node-rest-client.js:421:10)
at IncomingMessage.<anonymous> (/Users/craigpryde/Documents/Websites/favicons/node_modules/node-rest-client/lib/node-rest-client.js:587:13)
at emitNone (events.js:111:20)
at IncomingMessage.emit (events.js:208:7)
at endReadableNT (_stream_readable.js:1056:12)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)
This was happening in the original master version that I cloned, without making any updates. Unfortunately, I don't have the time to investigate the realfavicongenerator API to assist in debugging this other issue.
let me know your thoughts
Cheers
Craig
Thanks for the patch, it shipped in v5.0!
|
2025-04-01T06:38:37.465869
| 2016-01-03T09:52:51
|
124640363
|
{
"authors": [
"caboose0013",
"evitalis"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5751",
"repo": "evitalis/steamclean",
"url": "https://github.com/evitalis/steamclean/issues/6"
}
|
gharchive/issue
|
File not found if default steam does not exist
I don't have any games installed on the C drive, e.g. C:\Program Files (x86)\Steam\SteamApps\common does not exist, and the script fails:
Checking C:\Program Files (x86)\Steam\SteamApps\common
Traceback (most recent call last):
File "", line 368, in
File "", line 139, in find_redist
FileNotFoundError: [WinError 3] The system cannot find the path specified: 'C:\Program Files(x86)\Steam\SteamApps\common'
steamclean returned -1
Can you check the file explorer for the path to ensure it is valid please? On my system I also did not install anything to C:\ as that is my SSD which I do not wish to clutter up.
C:\Program Files (x86)\Steam\steamapps\common>dir
Volume in drive C has no label.
Volume Serial Number is XXXX-XXXX
Directory of C:\Program Files (x86)\Steam\steamapps\common
08/22/2015 12:09 <DIR> .
08/22/2015 12:09 <DIR> ..
0 File(s) 0 bytes
2 Dir(s) 98,086,707,200 bytes free
I manually created the common directory in steamapps on C:\ and it worked just fine.
I do not expect that it should have errored out though. I am going to set it as a bug for the moment and will review the code again. I might just be missing a check somewhere.
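The missing check is essentially a guard on the directory before walking it. A minimal sketch (the function name is taken from the traceback above; the real fix is in the commits below):
import os

def find_redist(steamdir):
    """Return game directories, skipping libraries with no games installed."""
    common = os.path.join(steamdir, 'SteamApps', 'common')
    if not os.path.isdir(common):
        # Steam library exists but no games were ever installed here.
        print('Skipping missing directory: %s' % common)
        return []
    return [os.path.join(common, d) for d in os.listdir(common)]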
I added some extra tests for the missing or invalid directory in commit 14867e4117898. This should be resolved but will leave open until the next build is posted and tested.
v0.6.0 allows for less restrictions on directory names and additional logging was added which should resolve this. Merge commit cf1a0ba .
|
2025-04-01T06:38:37.468890
| 2018-05-25T00:35:08
|
326342800
|
{
"authors": [
"evollu",
"jnrepo"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5752",
"repo": "evollu/react-native-fcm",
"url": "https://github.com/evollu/react-native-fcm/issues/947"
}
|
gharchive/issue
|
Possible incompatibility with react-native-firebase v4
Issue: clicking on the notification banner while the app is in the foreground will not trigger FCM.on(FCMEvent.Notification, (notif) => {}).
I was able to reproduce the issue I was having in the sample project. All I had to do was install react-native-firebase (4.2.0) and the issue would start happening. When I revert react-native-firebase to version 3.3.1 the issue doesn't seem to happen. I'm wondering at this point if it's better to completely migrate to using Messaging from react-native-firebase, or if there is a workaround for this issue?
if you are using react-native-firebase, I would recommend migrating to them. I have created a migration example project https://github.com/evollu/react-native-fcm/tree/firebase/Examples/firebase-migration
|
2025-04-01T06:38:37.475066
| 2021-11-05T17:34:28
|
1046085471
|
{
"authors": [
"scala-steward"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5753",
"repo": "evolution-gaming/stracer",
"url": "https://github.com/evolution-gaming/stracer/pull/199"
}
|
gharchive/pull-request
|
Update cats-helper to 2.6.2
Updates com.evolutiongaming:cats-helper from 2.3.0 to 2.6.2.
GitHub Release Notes - Version Diff
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Ignore future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "com.evolutiongaming", artifactId = "cats-helper" } ]
labels: library-update, early-semver-minor, semver-spec-minor
Superseded by #200.
|
2025-04-01T06:38:37.481702
| 2014-10-10T08:44:13
|
45458744
|
{
"authors": [
"divineprog",
"mazekeeper"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5754",
"repo": "evothings/evothings-examples",
"url": "https://github.com/evothings/evothings-examples/issues/73"
}
|
gharchive/issue
|
Suggested to have splash screen for "confusing apps"
Some apps need a further info text in the form of a splash screen; e.g.
"this apps requires the TI Sensor tag"
"this is a companion app for the BLE On/Off example"
"this app needs an external gps"
et cetera, when required. We don't want people to run an app, have nothing happen, and in the worst case walk away.
Suggestions proposed by Sionarch have been implemented in some of the examples, and should be done for the rest of the examples:
Descriptive title
Image of the device used by the example
When applicable update UI texts to mention the device used
|
2025-04-01T06:38:37.502535
| 2014-11-24T22:16:58
|
49950130
|
{
"authors": [
"Electronic-Junkie",
"ewolff",
"jevin36"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5755",
"repo": "ewolff/user-registration",
"url": "https://github.com/ewolff/user-registration/issues/5"
}
|
gharchive/issue
|
Chef example (Chapter 3.3.2) doesn't work (Missing Role(s) in Run List)
solo.rb: root point to the correct directory
"sudo chef-solo -j node.json -c solo.rb" exits with the following error
[2014-11-24T23:13:44+01:00] ERROR: Role tomcatserver.json (included by 'top level') is in the runlist but does not exist. Skipping expand.
Error expanding the run_list:
Missing Role(s) in Run List:
tomcatserver.json included by 'top level'
Original Run List
role[tomcatserver.json]
[2014-11-24T23:13:44+01:00] FATAL: Stacktrace dumped to /home/testserver/user-registration/chef/chef-stacktrace.out
Chef Client failed. 0 resources updated in 6.273272457 seconds
[2014-11-24T23:13:44+01:00] ERROR: The expanded run list includes nonexistent roles: tomcatserver.json
[2014-11-24T23:13:44+01:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
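The error indicates the run list entry references the role's file name rather than the role name. In a typical node.json the .json extension is dropped (a sketch; role name taken from the error above):
{
  "run_list": [
    "role[tomcatserver]"
  ]
}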
In version 2 you need to change root = '/home/ubuntu/user-registration-V2/chef' to all lower case: '/home/ubuntu/user-registration-v2/chef'. Otherwise you will receive the same error message.
@Electronic-Junkie This is actually expected. I clarified the documentation, see https://github.com/ewolff/user-registration-V2/tree/master/chef#chef-solo-on-ubuntu-1604 . Please open an issue in https://github.com/ewolff/user-registration-V2 if you feel this does not solve the problem.
|
2025-04-01T06:38:37.507583
| 2024-08-28T17:11:21
|
2492613555
|
{
"authors": [
"WardLT",
"braceal"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5756",
"repo": "exalearn/colmena",
"url": "https://github.com/exalearn/colmena/pull/142"
}
|
gharchive/pull-request
|
Address retry bug for HighThroughputExecutor.
This PR addresses a bug in the task retry logic causing it to fail for HighThroughputExecutor.
Updates
The main update was to resolve the result future prior to checking its failure info.
The previous release only tested this with ThreadPoolExecutor; in that case the result lives in shared memory, so the failure info was present. With HighThroughputExecutor, the result must be retrieved from the future object (see the sketch below).
Address CI issues.
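A sketch of the distinction using standard concurrent.futures (names here are illustrative, not Colmena's exact internals):
from concurrent.futures import Future

def check_failure(future: Future):
    # For a remote executor (e.g., HTEX) the failure details are serialized
    # back from the worker and only become visible once the future resolves;
    # with a thread pool the same object already lives in shared memory.
    exc = future.exception()  # blocks until the task completes
    if exc is not None:
        return exc            # retry logic can inspect the error here
    result = future.result()
    return getattr(result, 'failure_info', None)  # attribute name illustrative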
I'm taking a look at this soon. Fixing up some flake8 issues first
Thanks, @braceal !
|
2025-04-01T06:38:37.509724
| 2022-12-09T11:21:20
|
1486581313
|
{
"authors": [
"apahl",
"arshajii"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5757",
"repo": "exaloop/codon",
"url": "https://github.com/exaloop/codon/pull/78"
}
|
gharchive/pull-request
|
Update generators.md
The output of the generator example is the other way round.
Thanks for the PR -- I updated that example in https://github.com/exaloop/codon/pull/85/commits/91272b1580b0face2bd816ab76e55879e397367b. In this case I think it makes more sense to modify the gen function itself, as the current version is a bit unintuitive, which is what's done in that commit.
Thank you for this very exciting project! I really hope, it takes off.
|
2025-04-01T06:38:37.544376
| 2019-04-23T07:50:36
|
436046776
|
{
"authors": [
"cuibty",
"niemyjski",
"witskeeper"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5759",
"repo": "exceptionless/Exceptionless",
"url": "https://github.com/exceptionless/Exceptionless/issues/415"
}
|
gharchive/issue
|
any changes in es index between 4.x and the .net core version?
I am using the .NET Framework version, which is 4.x.
If I upgrade to the latest .NET Core version,
do I need to do something with the Elasticsearch index?
There should be no changes to the Elasticsearch indexes for 5.x, you'll just need to run the docker containers that are currently up with our ci label. We're finalizing the docs and then well be pushing the official feeds. Please let us know if you have any questions.
Good job, that's good news for everyone who wants to upgrade to the .NET Core version.
+1
The only changes needed are to host the site in docker and point elastic connection string to your existing elastic instance :). See this for more information: https://github.com/exceptionless/Exceptionless/wiki/Self-Hosting-Docker
|
2025-04-01T06:38:37.547411
| 2019-11-10T21:11:00
|
520658133
|
{
"authors": [
"ash-jc-allen",
"madisvain"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5760",
"repo": "exchangeratesapi/exchangeratesapi",
"url": "https://github.com/exchangeratesapi/exchangeratesapi/issues/66"
}
|
gharchive/issue
|
No weekend exchange rates?
Hi,
I'm not entirely sure if this is an issue or just me not understanding something. I can't seem to get any exchange rates for dates on weekends. For example, using the following URL for 2019-11-10, I am returned the results for 2019-11-08:
https://api.exchangeratesapi.io/2019-11-10?&base=EUR
{
"rates": {
"CAD": 1.4561,
"HKD": 8.6372,
"ISK": 137.7,
"PHP": 55.809,
"DKK": 7.4727,
"HUF": 333.37,
"CZK": 25.486,
"AUD": 1.6065,
"RON": 4.7638,
"SEK": 10.7025,
"IDR": 15463.05,
"INR": 78.652,
"BRL": 4.5583,
"RUB": 70.4653,
"HRK": 7.4345,
"JPY": 120.72,
"THB": 33.527,
"CHF": 1.0991,
"SGD": 1.5002,
"PLN": 4.261,
"BGN": 1.9558,
"TRY": 6.3513,
"CNY": 7.7115,
"NOK": 10.0893,
"NZD": 1.7426,
"ZAR": 16.3121,
"USD": 1.1034,
"MXN": 21.1383,
"ILS": 3.8533,
"GBP": 0.86158,
"KRW": 1276.66,
"MYR": 4.5609
},
"base": "EUR",
"date": "2019-11-08"
}
Thanks in advance :)
Hello - This API exposes the exchange rates from the European Central Bank. Those are reported only on workdays.
You can read more about it on https://www.ecb.europa.eu/stats/policy_and_exchange_rates/euro_reference_exchange_rates/html/index.en.html
|
2025-04-01T06:38:37.590612
| 2023-04-11T08:44:29
|
1662028204
|
{
"authors": [
"gomain",
"vaeng"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5761",
"repo": "exercism/cpp",
"url": "https://github.com/exercism/cpp/issues/589"
}
|
gharchive/issue
|
Concept for Basics
Add a concept exercise for the Basics concept. It should cover:
Variable Declaration
Variable Initialization
Calling Functions
Comments
[x] Pick a fitting exercise
[x] Add code files
[x] Add documentation files
[ ] Update the list of implemented exercises
[x] Get feedback
[x] Implement Feedback
The exercise of choice shall be Lasagna. It is widely adopted and covers everything we need.
I still have to write about.md and think about how it should be different from the introduction.md.
One point is whitespace. I think it is important to know how it is handled in C++, but not necessary before the first concept.
One of the hardest things to grasp about any language is its relation to its runtime, i.e. the machine. C++ (as with C) is an imperative language; it has statements that instruct the machine to do things. We're not talking about IO here: even a statement such as int i{1}; is an instruction, not merely some idea of a name bound to a value. Most (if not all) language tutorials I came across quickly skip this core fact, lay down the fundamental constructs of the language, and move on to a game of problem solving given those tools. This grows into a game of tweaking framework usage, trying to achieve some application result. That is all high level and desired (most of the time), but the missing fundamental link makes all code magic: say the right word, and it will (magically) happen.
|
2025-04-01T06:38:37.610540
| 2021-01-29T14:00:15
|
796905425
|
{
"authors": [
"ErikSchierboom",
"Pamplemousse"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5762",
"repo": "exercism/nix",
"url": "https://github.com/exercism/nix/issues/10"
}
|
gharchive/issue
|
Add tags
This issue is part of the migration to v3. You can read full details about the various changes here.
In Exercism v3, tracks can be annotated with tags. This allows searching for tracks with a certain tag combination, making it easy for students to find an interesting track to join.
Tags are specified in the top-level "tags" field in the track's config.json file and are defined as an array of strings, as specified in the spec.
Goal
The "tags" field in the config.json file should be updated to contain the tags that are relevant to this track. The list of tags that can be used is listed in the spec.
Example
{
"tags": [
"runtime/jvm",
"platform/windows",
"platform/linux",
"paradigm/declarative",
"paradigm/functional",
"paradigm/object_oriented"
]
}
Tracking
https://github.com/exercism/v3-launch/issues/1
This seems to have been tackled in #47.
|
2025-04-01T06:38:37.634784
| 2022-04-06T18:35:41
|
1194989687
|
{
"authors": [
"RTurek",
"kotp",
"simonbacquie"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5763",
"repo": "exercism/ruby",
"url": "https://github.com/exercism/ruby/issues/1311"
}
|
gharchive/issue
|
Update Instructions For "Twelve Days"
Overview
We should update the instructions of the "Twelve Days" exercise to better explain what the exercise is asking the student to do. I listed specific things to focus on in the checklist below. I'll also be opening a PR with my suggestions for this change.
https://exercism.org/tracks/ruby/exercises/twelve-days
Details
My team uses this exercise as a code challenge for developer candidates from several different countries. I've learned a lot from watching people see this exercise for the first time. The current instruction of "Output the lyrics to 'The Twelve Days of Christmas'." does not seem adequate. We can add some additional text to explain what is going on in this challenge, so non-native English speakers and non-Christians have an easier time understanding what to do.
One specific issue I have is that I think the spirit of the "Twelve Days" challenge implies that you should not just be reading and outputting the available text file (example of this), you should be writing an algorithm that programmatically generates the song. The text file should just be available for the test runner to compare to the output of TwelveDays.song. If we cannot prevent the student from just reading and returning the contents of the text file in their song method, then we should at least suggest in the instructions that they should be generating the song themselves and focusing on a few specific things when doing so, for learning purposes.
TODO
[ ] Update Instructions.md
[ ] To prompt the student with some more help to get started
[ ] Give some thought to the cultural unfamiliarity of this song - not everyone just "gets it" when seeing it for the first time - I run into this when using this exercise in interviews with non-Western, non-Christian candidates who have never heard of it before
[ ] Write test to assert that the text file is not being read by the TwelveDays class
[ ] expect File not_to receive :read or something like that - not sure if this is possible. The test is a nice-to-have for me, I'm mostly interested in adding copy to the instructions that discourage the student from trying to just output the same exact file the test runner is using.
# Instructions
Your task in this exercise is to write a method that returns the lyrics of the song: 'The Twelve Days of Christmas'.
"The Twelve Days of Christmas" is a common Christmas carol. Each subsequent verse of the song builds on the previous verse.
Each verse of the song has several elements that are repeated from verse to verse. Identify these elements and try to re-use them when writing your code. Remember to be "DRY" (Don't Repeat Yourself)!
The lyrics your method returns should exactly match the text shown below.
# The Twelve Days Of Christmas
```text
On the first day of Christmas my true love gave to me: a Partridge in a Pear Tree.
On the second day of Christmas my true love gave to me: two Turtle Doves, and a Partridge in a Pear Tree.
On the third day of Christmas my true love gave to me: three French Hens, two Turtle Doves, and a Partridge in a Pear Tree.
On the fourth day of Christmas my true love gave to me: four Calling Birds, three French Hens, two Turtle Doves, and a Partridge in a Pear Tree.
On the fifth day of Christmas my true love gave to me: five Gold Rings, four Calling Birds, three French Hens, two Turtle Doves, and a Partridge in a Pear Tree.
On the sixth day of Christmas my true love gave to me: six Geese-a-Laying, five Gold Rings, four Calling Birds, three French Hens, two Turtle Doves, and a Partridge in a Pear Tree.
On the seventh day of Christmas my true love gave to me: seven Swans-a-Swimming, six Geese-a-Laying, five Gold Rings, four Calling Birds, three French Hens, two Turtle Doves, and a Partridge in a Pear Tree.
On the eighth day of Christmas my true love gave to me: eight Maids-a-Milking, seven Swans-a-Swimming, six Geese-a-Laying, five Gold Rings, four Calling Birds, three French Hens, two Turtle Doves, and a Partridge in a Pear Tree.
On the ninth day of Christmas my true love gave to me: nine Ladies Dancing, eight Maids-a-Milking, seven Swans-a-Swimming, six Geese-a-Laying, five Gold Rings, four Calling Birds, three French Hens, two Turtle Doves, and a Partridge in a Pear Tree.
On the tenth day of Christmas my true love gave to me: ten Lords-a-Leaping, nine Ladies Dancing, eight Maids-a-Milking, seven Swans-a-Swimming, six Geese-a-Laying, five Gold Rings, four Calling Birds, three French Hens, two Turtle Doves, and a Partridge in a Pear Tree.
On the eleventh day of Christmas my true love gave to me: eleven Pipers Piping, ten Lords-a-Leaping, nine Ladies Dancing, eight Maids-a-Milking, seven Swans-a-Swimming, six Geese-a-Laying, five Gold Rings, four Calling Birds, three French Hens, two Turtle Doves, and a Partridge in a Pear Tree.
On the twelfth day of Christmas my true love gave to me: twelve Drummers Drumming, eleven Pipers Piping, ten Lords-a-Leaping, nine Ladies Dancing, eight Maids-a-Milking, seven Swans-a-Swimming, six Geese-a-Laying, five Gold Rings, four Calling Birds, three French Hens, two Turtle Doves, and a Partridge in a Pear Tree.
```
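The element re-use the draft instructions ask for can be sketched roughly as follows (Python is used here purely as illustration; the track itself is Ruby):
```python
# Illustrative sketch only (the exercise is Ruby): each verse is assembled
# from shared pieces instead of hard-coding twelve near-identical strings.
ORDINALS = ["first", "second", "third", "fourth", "fifth", "sixth",
            "seventh", "eighth", "ninth", "tenth", "eleventh", "twelfth"]
GIFTS = ["a Partridge in a Pear Tree", "two Turtle Doves", "three French Hens",
         "four Calling Birds", "five Gold Rings", "six Geese-a-Laying",
         "seven Swans-a-Swimming", "eight Maids-a-Milking", "nine Ladies Dancing",
         "ten Lords-a-Leaping", "eleven Pipers Piping", "twelve Drummers Drumming"]

def verse(n):
    gifts = GIFTS[n - 1::-1]            # gifts for day n, most recent first
    if n > 1:
        gifts[-1] = "and " + gifts[-1]  # "..., and a Partridge in a Pear Tree"
    return (f"On the {ORDINALS[n - 1]} day of Christmas "
            f"my true love gave to me: {', '.join(gifts)}.")

song = "\n".join(verse(n) for n in range(1, 13))
```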
One idea I had was to add an extra test case, ensuring that the solution does not call any of the typical Ruby methods that would read from a file
Because the Twelve Days exercise is not only for Ruby, these suggestions should be discussed in the problem-specifications repository, so I am going to transfer this there.
|
2025-04-01T06:38:37.670062
| 2018-01-07T12:47:07
|
286566540
|
{
"authors": [
"exodus4d",
"levialex"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5764",
"repo": "exodus4d/pathfinder",
"url": "https://github.com/exodus4d/pathfinder/issues/575"
}
|
gharchive/issue
|
ERROR 502: Bad Gateway
Howdy folks!
I'm looking for some help with an issue that I currently have with Pathfinder.
Every time I try to log in with my account, I get the following error: http://prntscr.com/hx6n0c
If I just press "Restart", the page reloads, I log in again, and I get the very same error again, and again and again, no matter how many times I try this.
Is there any way to fix this issue?
Thank you!
@levialex, as @dkrotil already said, we can't provide support for self-hosted installations without having deeper knowledge about the setup/server.
If there is something buggy and reproducible on the public installation please let us know.
If you can get more information about the server setup regarding your issue please let us know.
|
2025-04-01T06:38:37.704448
| 2023-11-16T12:21:05
|
1996738026
|
{
"authors": [
"NaveenKumarStark",
"rmitsch"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5765",
"repo": "explosion/spacy-llm",
"url": "https://github.com/explosion/spacy-llm/issues/372"
}
|
gharchive/issue
|
Spacy-llm bedrock issue
I am getting the below error
[2023-11-16 17:39:57,111] ERROR in app: Exception on /watsonnlu [POST]
Traceback (most recent call last):
File "/Users/Naveen/04102023/assistant_nlu_service_proprietary/nav-spacy-test/lib/python3.9/site-packages/flask/app.py", line 2077, in wsgi_app
response = self.full_dispatch_request()
File "/Users/Naveen/04102023/assistant_nlu_service_proprietary/nav-spacy-test/lib/python3.9/site-packages/flask/app.py", line 1525, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/Users/Naveen/04102023/assistant_nlu_service_proprietary/nav-spacy-test/lib/python3.9/site-packages/flask/app.py", line 1523, in full_dispatch_request
rv = self.dispatch_request()
File "/Users/Naveen/04102023/assistant_nlu_service_proprietary/nav-spacy-test/lib/python3.9/site-packages/flask/app.py", line 1509, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
File "/Users/Naveen/04102023/assistant_nlu_service_proprietary/nlu_service.py", line 62, in message
resp = predict(data["input"], data["bot_id"])
File "/Users/Naveen/04102023/assistant_nlu_service_proprietary/nlu/assistant_nlu.py", line 250, in predict
intent_result=predict_intent(bot_id,user_query,sql_connection,language,session,Is_FAQ_present)
File "/Users/Naveen/04102023/assistant_nlu_service_proprietary/nlu/predict_intent.py", line 498, in predict_intent
spacy_llm_output_list=predict_spacy_llm(user_query,file_name)
File "/Users/Naveen/04102023/assistant_nlu_service_proprietary/nlu/predict_intent.py", line 349, in predict_spacy_llm
Spacy_LLM_model = assemble(config_file_path)
File "/Users/Naveen/04102023/assistant_nlu_service_proprietary/nav-spacy-test/lib/python3.9/site-packages/spacy_llm/util.py", line 48, in assemble
return assemble_from_config(config)
File "/Users/Naveen/04102023/assistant_nlu_service_proprietary/nav-spacy-test/lib/python3.9/site-packages/spacy_llm/util.py", line 28, in assemble_from_config
nlp = load_model_from_config(config, auto_fill=True)
File "/Users/Naveen/04102023/assistant_nlu_service_proprietary/nav-spacy-test/lib/python3.9/site-packages/spacy/util.py", line 587, in load_model_from_config
nlp = lang_cls.from_config(
File "/Users/Naveen/04102023/assistant_nlu_service_proprietary/nav-spacy-test/lib/python3.9/site-packages/spacy/language.py", line 1847, in from_config
nlp.add_pipe(
File "/Users/Naveen/04102023/assistant_nlu_service_proprietary/nav-spacy-test/lib/python3.9/site-packages/spacy/language.py", line 814, in add_pipe
pipe_component = self.create_pipe(
File "/Users/Naveen/04102023/assistant_nlu_service_proprietary/nav-spacy-test/lib/python3.9/site-packages/spacy/language.py", line 702, in create_pipe
resolved = registry.resolve(cfg, validate=validate)
File "/Users/Naveen/04102023/assistant_nlu_service_proprietary/nav-spacy-test/lib/python3.9/site-packages/confection/__init__.py", line 756, in resolve
resolved, _ = cls._make(
File "/Users/Naveen/04102023/assistant_nlu_service_proprietary/nav-spacy-test/lib/python3.9/site-packages/confection/__init__.py", line 805, in _make
filled, _, resolved = cls._fill(
File "/Users/Naveen/04102023/assistant_nlu_service_proprietary/nav-spacy-test/lib/python3.9/site-packages/confection/__init__.py", line 860, in _fill
filled[key], validation[v_key], final[key] = cls._fill(
File "/Users/Naveen/04102023/assistant_nlu_service_proprietary/nav-spacy-test/lib/python3.9/site-packages/confection/__init__.py", line 877, in _fill
getter_result = getter(*args, **kwargs)
File "/Users/Naveen/04102023/assistant_nlu_service_proprietary/nav-spacy-test/lib/python3.9/site-packages/spacy_llm/models/langchain/model.py", line 90, in langchain_model
return LangChain(
File "/Users/Naveen/04102023/assistant_nlu_service_proprietary/nav-spacy-test/lib/python3.9/site-packages/spacy_llm/models/langchain/model.py", line 34, in __init__
self._langchain_model = LangChain.get_type_to_cls_dict()[api](
File "/Users/Naveen/04102023/assistant_nlu_service_proprietary/nav-spacy-test/lib/python3.9/site-packages/langchain/load/serializable.py", line 97, in __init__
super().__init__(**kwargs)
File "/Users/Naveen/04102023/assistant_nlu_service_proprietary/nav-spacy-test/lib/python3.9/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for Bedrock
model_name
extra fields not permitted (type=value_error.extra)
My config file is as follows:
[nlp]
lang = "en"
pipeline = ["llm"]
batch_size = 128
[components]
[components.llm]
factory = "llm"
[components.llm.model]
@llm_models = "langchain.Bedrock.v1"
name = "amazon.titan-text-express-v1"
config = {"model_id": "amazon.titan-text-express-v1"}
[components.llm.task]
@llm_tasks = "spacy.TextCat.v2"
labels = What_can_you_do, Unlock_Account, More_questions
exclusive_classes = false
[components.llm.task.examples]
@misc = "spacy.FewShotReader.v1"
path = "models/nav_testing_3fed4208-dad-4635-8419-b4kumar002_selective_examples.jsonl"
[components.llm.task.normalizer]
@misc = "spacy.LowercaseNormalizer.v1"
Hi @NaveenKumarStark, please always format your code and console output. Thanks for bringing this up, langchain seems to expect different keyword args for Bedrock than for other models. We'll patch this up shortly.
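For anyone hitting this before the patch lands, a hedged sketch of the mismatch (assumes langchain's Bedrock wrapper plus a working boto3/AWS credentials setup):
```python
# langchain's Bedrock class is keyed on `model_id`; forwarding the spacy-llm
# `name` as `model_name` is what triggers "extra fields not permitted" above.
from langchain.llms import Bedrock

llm = Bedrock(model_id="amazon.titan-text-express-v1")  # accepted keyword
# Bedrock(model_name="amazon.titan-text-express-v1")    # -> pydantic ValidationError
```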
|
2025-04-01T06:38:37.710390
| 2022-09-05T10:48:29
|
1361749227
|
{
"authors": [
"Simek"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5766",
"repo": "expo/eas-cli",
"url": "https://github.com/expo/eas-cli/pull/1341"
}
|
gharchive/pull-request
|
[eas-cli] bump oclif help plugin
Checklist
[X] I've added an entry to CHANGELOG.md if necessary. You can comment this pull request with /changelog-entry [breaking-change|new-feature|bug-fix|chore] [message] and CHANGELOG.md will be updated automatically.
Why
Refs ENG-6204
How
This PR bumps our fork of the oclif help plugin, which includes a corrected description for the help command in the help prompt.
Test Plan
The change has been tested locally by running yarn eas.
Preview
/changelog-entry bug-fix Fix description of help command in help prompt.
|
2025-04-01T06:38:37.714588
| 2023-01-09T10:32:45
|
1525314811
|
{
"authors": [
"szdziedzic"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5767",
"repo": "expo/eas-cli",
"url": "https://github.com/expo/eas-cli/pull/1614"
}
|
gharchive/pull-request
|
[eas-cli] validate chosen build in eas build:run command
Checklist
[ ] I've added an entry to CHANGELOG.md if necessary. You can comment this pull request with /changelog-entry [breaking-change|new-feature|bug-fix|chore] [message] and CHANGELOG.md will be updated automatically.
Why
When adding a paginated select prompt to the eas build:run command, I introduced a new bug.
When you have no emulator/simulator builds in your project, you will get this strange assertion error message.
EDIT: I noticed that I wasn't sanitizing builds selected by id or as latest. This is also fixed now.
Now I sanitize every selected build to make sure it is valid in the context of the eas build:run command.
How
Throw a custom error if there are no simulator/emulator builds for a given platform.
Test Plan
Manual tests.
/changelog-entry bug-fix Validate chosen build in the eas build:run command
|
2025-04-01T06:38:37.735436
| 2022-08-04T20:02:53
|
1329120244
|
{
"authors": [
"EvanBacon",
"matallui"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5768",
"repo": "expo/expo-electron-adapter",
"url": "https://github.com/expo/expo-electron-adapter/issues/19"
}
|
gharchive/issue
|
[question] Is this project being maintained?
Hi! I'm currently working on a new Expo project and would like to be able to deploy to desktop via Electron at some point.
I've found this project, but it looks like no one has been maintaining it since its creation (15 months ago).
Are there any plans by Expo to improve support for Electron in the future?
We are moving towards a unified bundling process with Metro, and likewise we want to use a unified native runtime as well. Mac Catalyst enables React Native iOS apps to run natively on macOS machines, and React Native Windows supports the same on Windows. We don't have the bandwidth to support any more platforms outside of iOS, Android, and web right now, but if we did, it would be proper native.
This package is a light wrapper around electron-webpack if anyone wants to go more upstream.
Fair enough! I do agree it would be awesome to support desktop native platforms. However, I feel like the problem with that (and I noticed the same in frameworks like Flutter) is that most native packages you'll use in your app won't support desktop platforms, so even if Expo and RN supported those fully, it could still be a problem.
I see Electron as a "quick" win in wrapping the web app into a desktop app, and packages for Node are much easier to find.
So, at least for our first release, we'd like to try and get a quick win here by using electron to build the desktop version of our apps, and later on maybe consider switching to native.
|
2025-04-01T06:38:37.837677
| 2021-12-07T08:00:41
|
1073034423
|
{
"authors": [
"byCedric",
"sangameshsomawar"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5769",
"repo": "expo/snack",
"url": "https://github.com/expo/snack/issues/238"
}
|
gharchive/issue
|
ThemeProvider is not working for external packages.
Summary
I created a component library on top of styled-components.
Library: https://github.com/sangameshsomawar/test
Npm: https://www.npmjs.com/package/@sangameshsomawar/test
When I install the library in any new react-native project or expo project, the theme is passed correctly to the external package, and colors are rendered properly. But when I test the same code in Snack, the theme does not work. This issue only happens in Snack.
Please find below snack examples:
Output when Typography Component is written directly in snack
Output when Typography code is fetched from the library and then referred in snack
Snack<EMAIL_ADDRESS>Snack Link<EMAIL_ADDRESS>
I think the context is not getting passed correctly in Snack.
What platform(s) does this occur on?
Android, iOS, Web
SDK Version
No response
Reproducible demo or steps to reproduce from a blank project
see above
@IjzerenHein Please Help.
Hi @sangameshsomawar, your library is a full bare react native app. It contains another react native instance which is causing issues here. If you want to add this lib to your project in Snack, you have to create a package with just the JS files you want, not the whole app. (e.g. only the files in lib)
Hope this helps.
@byCedric: I have tried that as well. Let me create a snack example with that approach. I will share it with you here.
|
2025-04-01T06:38:37.840125
| 2024-10-29T14:08:15
|
2621379945
|
{
"authors": [
"byCedric"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5770",
"repo": "expo/snack",
"url": "https://github.com/expo/snack/pull/614"
}
|
gharchive/pull-request
|
refactor: upgrade to Expo SDK 52
Why
This is a maintenance release, to upgrade to SDK 52. Note, this PR does not turn SDK 52 into the default SDK yet. We still need more testing for that.
[!IMPORTANT]
This PR should be merged after upgrading to the first stable SDK 52 version.
How
See commits, followed the upgrade guide.
Test Plan
See staging.
Going to merge this, and open a PR to upgrade the snack-runtime package separately.
|
2025-04-01T06:38:37.862095
| 2018-06-10T01:21:41
|
330931804
|
{
"authors": [
"dougwilson",
"pYr0x"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5771",
"repo": "expressjs/compression",
"url": "https://github.com/expressjs/compression/issues/142"
}
|
gharchive/issue
|
chrome lighthouse text compression
I have searched the closed issues and found some others that have the same problem, but no solution.
I enabled compression with
app.use(compression({ threshold: 0 }));
The Chrome network tab says: no compression, no Content-Encoding.
HTTP/1.1 200 OK
X-Powered-By: Express
Accept-Ranges: bytes
Cache-Control: public, max-age=0
Last-Modified: Sun, 10 Jun 2018 00:30:44 GMT
ETag: W/"1a526-163e71a33c0"
Content-Type: text/css; charset=UTF-8
Vary: Accept-Encoding
Date: Sun, 10 Jun 2018 01:14:59 GMT
Connection: keep-alive
Transfer-Encoding: chunked
The Chrome Lighthouse extension says: Enable Text Compression!
Firefox network tab: no Content-Encoding on the response.
I turned debugging on with set DEBUG=compression.
I see on every request:
compression gzip compression +1m
GET /dist/bundles/medicalpad/index.css 200 1.590 ms - -
If I use curl -i --compressed http://localhost:3000/dist/bundles/medicalpad/index.css
I see:
HTTP/1.1 200 OK
X-Powered-By: Express
Accept-Ranges: bytes
Cache-Control: public, max-age=0
Last-Modified: Sun, 10 Jun 2018 00:30:44 GMT
ETag: W/"1a526-163e71a33c0"
Content-Type: text/css; charset=UTF-8
Vary: Accept-Encoding
Content-Encoding: gzip <<<-------
Date: Sun, 10 Jun 2018 01:13:40 GMT
Connection: keep-alive
Transfer-Encoding: chunked
So I don't know what's going on there.
I am using:
express 4.16.0
compression: 1.7.2
node: 8.11.2
I have made a demo: https://github.com/pYr0x/express-gzip
Even on that simple example, Chrome and other browsers don't show the Content-Encoding.
If I visit other global websites, e.g. https://www.nytimes.com/, I see a content-encoding: gzip header, so a bug in Chrome can be excluded.
The only difference in the response headers I found is that:
content-length is missing
and there is a Transfer-Encoding: chunked header.
Can Transfer-Encoding: chunked be the reason why the Content-Encoding is not shown or is ignored by Chrome?
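One way to sanity-check the headers outside the browser (a rough sketch; note that requests transparently decompresses gzip bodies but leaves the headers intact):
```python
# Mirrors the curl --compressed test above: ask for gzip and inspect headers.
import requests

r = requests.get(
    "http://localhost:3000/dist/bundles/medicalpad/index.css",
    headers={"Accept-Encoding": "gzip"},
)
print(r.headers.get("Content-Encoding"))   # expect "gzip" when compression runs
print(r.headers.get("Transfer-Encoding"))  # "chunked" responses omit Content-Length
```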
@dougwilson does the demo show you the Content-Encoding? I am on Windows 10 with Chrome 67.0.3396.79.
Thanks for providing a repo! I am getting Content-Encoding: gzip in the Chrome Console just as I would expect from your project:
You are on Windows, right?
What Node and npm versions do you have installed?
Yes, I am on Windows 10. I used Node.js 8.11.2 (installed fresh just for this) so we would be using the exact same Node.js to try and keep every as similar as possible from what you said so far.
So I have no clue what's going on... :(
Same. Variations of this issue have been reported many times, but either it suddenly started working and the reporter doesn't know why, or I had to close due to no further progress. If I could reproduce it I could try to track it down; otherwise, since you can reproduce it, we're all waiting to hear what the issue is that you're having and where the bug is located.
OK, one last try:
Can you disable all Chrome extensions and try it with a clean Chrome browser?
Can you tell me the file size that Chrome gets with gzip? I will compare that with the file size I get.
Hi @pYr0x sorry I didn't get back to you earlier. I just tried out your additional suggestions and everything is still working fine for me.
I did the following:
Installed a brand new Windows 10 on an old machine.
Installed a new copy of Google Chrome
Installed a fresh copy of Node.js
Ran the code I posted above and opened Chrome
This means that nothing I had on my machine before was present. There were no extensions in Chrome, as it's a fresh install on a fresh Windows 10 install.
The file size that Chrome is showing with gzip is 87.9 kb.
Closing stale issue.
|
2025-04-01T06:38:37.887167
| 2014-07-24T01:27:09
|
38590049
|
{
"authors": [
"ChiperSoft",
"JessieAMorris",
"affanshahid",
"ahmetatar",
"arcanis",
"calebmer",
"dougwilson",
"felixfbecker",
"gtomitsuka",
"joepie91",
"jonathanong",
"listepo",
"mikemaccana",
"olalonde",
"q42jaap",
"wesleytodd",
"wmertens",
"xjamundx"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5772",
"repo": "expressjs/express",
"url": "https://github.com/expressjs/express/issues/2259"
}
|
gharchive/issue
|
promises
Now that promises are going mainstream, I'm trying to think of how to make Express more async-friendly. An idea is to use promises.
next() now returns a promise
if middleware returns a promise, that promise is resolved and propagated up next()s
app.use(function (req, res, next) {
// a promise must be returned,
// otherwise the function will be assumed to be synchronous
return User.get(req.session.userid).then(function (user) {
req.user = user
})
.then(next) // execute all downstream middleware
.then(function () {
// send a response after all downstream middleware have executed
// this is equivalent to koa's "upstream"
res.send(user)
})
})
Error handlers are now more koa-like:
app.use(function (req, res, next) {
return next().catch(function (err) {
// a custom error handler
if (res.headerSent) return
res.statusCode = err.status || 500
res.send(err.message)
})
})
app.use(function (req, res) {
throw new Error('hahahah')
})
Pros:
it should be backwards compatible since you don't have to resolve the promise returned from next()
much easier error handling including throwing
solves issues shown in https://github.com/visionmedia/express/issues/2255
no more fn.length checking ~_~
could probably easily upgrade to es7 async functions
Cons:
promises
upgrading middleware and supporting both signatures might be a pain in the ass
probably a lot slower
Looks promising! (sorry)
Just as a data point, here is a framework that uses a WSGI-like setup. Basically, you make a chain of middlewares that optionally modify the request before passing it down the chain and then optionally modify the response before passing it up the chain.
The response is a simple object with the headers and an iterable body, and once the entire chain ran it is sent to the user. You can't have "headers already sent" errors, and error handling is a lot cleaner.
The interesting bit is that middleware can return promises for the header or body parts, and they are sent as soon as they resolve. This makes for very simple code.
I wonder if Express can be made to do the same while retaining the res API for middleware.
Why not giving users the ability to set our own trigger handler?
Something like this would be easy to implement, and wouldn't hurt performance (and could have other interesting uses, such as intercepting the routing and monitoring action performance):
app.wrap(function (action, req, res, next) {
  var result = action(req, res, next); // renamed from `res` to avoid shadowing the response object
  if (result && result.then) {
    result.then(function () {
      next();
    }, function () {
      res.status(500);
    });
  }
})
It would be a simple feature which would allow us to start using promise, until you decide if/how you want to support promises in the Express core. Would you consider a PR?
This issue is pretty old. Promise support will be coming in Express 5.
Really? It has not been listed on the related issue. Are they already supported, or just planned?
@arcanis sorry, I didn't add it. Promise support should be listed in the 5.0 issue now.
@arcanis I must admit I don't understand how your code works (what does app.wrap do?)
I use promises with Express by wrapping handlers (easy in coffeescript):
Q = require 'q'
class HTTPError extends Error
name: "HTTPError"
constructor: (status, message) ->
if +status >= 100
@status = status
else
@status = 500
if status
message = status
@message = message
class NotFoundHTTPError extends HTTPError
name: "NotFoundHTTPError"
constructor: (message) -> super 404, message
httpErrorHandler = (err, req, res, next) ->
if err instanceof HTTPError
res.send err.status, err.message
else
next err
promiseJson = (fn) ->
(req, res) -> res.json Q.fcall fn, req
app.use (require 'express-promise')()
app.use httpErrorHandler
app.get '/foo/', promiseJson (req) ->
if idontwanttoanswer
throw new NotFoundHTTPError "these are not the droids you're looking for"
# some code that returns a value or a promise
...
@wmertens It's only a proposal. It would define how should the middlewares be called by the router. The default one would just pass the parameters to the actual middleware functions, but a more sophisticated one could just as well expect the middleware to return a promise, and 'convert' that promise to a callback.
But it's not so important if core promise support are on their way. Just something that I think could be interesting.
In case anybody ends up on this thread looking to use promises in Express right now, I've written an article here on exactly that, using express-promise-router.
Might be a useful stop-gap solution for some people, even if it doesn't completely implement them as described here :)
+1
IMO we can't really have it until joyent/node#7714 is fixed (or at least, promised to be fixed).
Why not Bluebird?
https://www.npmjs.com/package/native-or-bluebird
Any update on this?
And btw since no one mentioned it, the .catch() error handler style could be achieved with next() callback style too in theory, by allowing to pass a callback to next().
I've been trying to get this merged into pillarjs/router (see
Any updates on this? Would be nice if middleware could return a promise as an alternative to calling next()/next(err).
Can I take care of it if nobody is working on it?
There are a few PRs both here and in https://github.com/pillarjs/router . I plan to merge in basic support to 5.0 over the weekend, likely https://github.com/pillarjs/router/pull/32 without the upstream support (for now), since upstream has a lot more kinks to work out.
In the mean time, for express 4, we have monkey patched Layer to wrap handlers with code that does exactly what we want:
https://gist.github.com/q42jaap/f2fb93d96fda6384d3e3fc51977dec90
We have been using https://www.npmjs.com/package/async-middleware for quite a while now; it just wraps middlewares explicitly without any monkey patching.
Hoping to add something like this into kraken-js https://github.com/krakenjs/kraken-js/issues/495
Any word on this? There's a ton of middlewares and such, but this would be nice to get in mainline express.
https://github.com/pillarjs/router/pull/60
It is lined up for the 2.x version of router, which will land in express 5.
Any updates on when promise support will land in Express 5?
The basic support is merged to the router 2.x branch. https://github.com/pillarjs/router/pull/60
There is one other open PR over there, but once the beta for that is released I think we can release another prerelease version of express 5. I am not sure if @dougwilson has a concrete timeline on that.
I'm going to close this issue now that Express.js 5.0.0-alpha.7 has been published which includes the initial support for Promises in the router. Middleware and handlers can now return promises and if the promise is rejected, next(err) will be called with err being the value of the rejection. The implementation is seeking feedback from real usage, and please open any feedback as a new issue, either in this issue tracker or in the router issue tracker.
I am currently working on writing up Express.js-specific documentation on this feature, but in the meantime, the documentation can be found in the router repository:
https://github.com/pillarjs/router/tree/v2.0.0-alpha.1#middleware
The function can optionally return a Promise object. If a Promise object is returned from the function, the router will attach an onRejected callback using .then. If the promise is rejected, next will be called with the rejected value, or an error if the value is falsy.
Is there a single example of a route using Promises? I've read the router changelogs etc, from alpha 2 to alpha 7, but I can't find anything.
I know, eg, https://arc.codes uses
exports.handler = async function http(request) {
return {
status: 201,
type: 'text/html; charset=utf8',
body: `
<!doctype html>
<html>
<body>hello world</body>
</html>
`
}
}
Does express 5 allow me to return a response from a route?
Asked a question about generally getting rid of callbacks here (so we can just return a response) but deleted it - filed it as a new issue #3884 instead.
|
2025-04-01T06:38:37.914786
| 2024-03-23T14:35:35
|
2203905446
|
{
"authors": [
"bhelx"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5773",
"repo": "extism/js-pdk",
"url": "https://github.com/extism/js-pdk/issues/58"
}
|
gharchive/issue
|
Regression bug when exception is thrown
There seems to be a regression around the core engine handling exceptions. Given this test program:
function greet() {
throw new Error('hello')
}
module.exports = { greet }
Output is:
2024/03/23 09:33:11 No runtime detected
2024/03/23 09:33:11 Calling function : greet
Error: wasm error: unreachable
wasm stack trace:
.$1151(i32,i32,i32,i32,i32,i32)
.$1290(i32,i32)
.$1309(i32,i32,i32,i32,i32)
.$1357(i32) i32
.$1374() i32
We should see the exception instead.
Should be fixed in #57
|
2025-04-01T06:38:37.921226
| 2016-12-16T13:40:14
|
196061485
|
{
"authors": [
"Ilphrin",
"extr0py"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5774",
"repo": "extr0py/oni",
"url": "https://github.com/extr0py/oni/issues/106"
}
|
gharchive/issue
|
[NERDTree] "Press ? for help" appear in the wrong buffer
Hi! First, let me thank you a lot for your awesome GUI; even if it is still in development/beta, it looks really cool (I tried NyaoVim first, and was a bit disappointed x) ).
Now, when I open Neovim, NERDTree opens automatically, but there is a little graphic issue when using Oni:
When I write in the buffer, the text replaces these letters, so it really looks like a graphical issue, as it is not in the buffer. I have updated to the latest npm package for Oni, and for Neovim and NERDTree too.
(By the way, I am on Linux Mint 18)
Here is my .vimrc if you need https://github.com/Ilphrin/.vim/blob/master/vimrc
Hi @Ilphrin ! Thanks for the kind words!
Yes, this definitely looks like a rendering bug. I believe I hit this on Windows too on startup - it seems like the beginning of the first line is always problematic. It might be we're missing or not handling one of the neovim msgpack-RPC actions correctly.
When i'll have time, i'll try to give some help on Oni, this really deserves more hands =D
BTW, when I run Ctrl+L it refreshes the page and the glitch doesn't appear anymore. My NERDTree has a plugin so it is launched in every tab and on startup; maybe it is because the browser draws the content too soon? (Just suppositions, I don't know a thing about what's happening!)
Awesome, would be great to have the help! :)
I just checked out this issue... It looks like it was a bug that was caused by a couple of contributing factors:
On startup, we would tell Neovim that we had a fixed size screen (80 cols x 40 rows), and then we would always resize to the proper size based on the font afterwards
During resize, Neovim sends a CLEAR action via the msgpack-rpc API. For that action, the cursor position wasn't being reset - so after clearing, it would just start rendering the first line wherever the cursor had been previously. So that was the root problem, and easy to fix.
Should be addressed now by PR #115
|
2025-04-01T06:38:37.954717
| 2016-10-06T13:59:31
|
181422749
|
{
"authors": [
"lucasmezencio",
"pedrommone"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5776",
"repo": "ezdeliveryco/snorlax",
"url": "https://github.com/ezdeliveryco/snorlax/issues/44"
}
|
gharchive/issue
|
Set milestones and goals
ping @lucasmezencio and @pedrochaves
Which type of milestones/goals do you think we can have?
I think this issue must be discussed internally.
|
2025-04-01T06:38:37.960203
| 2016-03-16T12:08:08
|
141254054
|
{
"authors": [
"egroups",
"ezequieljuliano",
"sglienke",
"talpa"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5777",
"repo": "ezequieljuliano/DelphiLaboratory",
"url": "https://github.com/ezequieljuliano/DelphiLaboratory/issues/1"
}
|
gharchive/issue
|
[dcc32 Fatal Error] F2084 Internal Error: DBG3198
Hi, I tried to build SpringAndMVC and I get this error at the end. Why? :)
Ales
Have you installed and configured the latest versions of DMVC and Spring4D (release 1.2)?
This Embarcadero link may help you: http://docwiki.embarcadero.com/RADStudio/Seattle/en/F2084_Internal_Error_-%25s%25d(Delphi)
Yes I have. I have the same problem in another project with TSession.
I have the same problem with today's Spring release/1.2 without DMVC. In Debug mode I get this error; in Release mode it compiles OK. Delphi XE6.
This is an error in the compiler (reported as https://quality.embarcadero.com/browse/RSP-14974)
The circumstances to cause this are listed in my comment on that issue.
If you follow my recommendation to build Spring4D and then only point to the dcu directory, instead of adding the source directories to the library or search path, it will not appear as far as I know.
However I removed the inline from the TCollections.Create* methods as they did not have much of a beneficial effect anyway so this error should be gone once you update to the latest commit in release/1.2
Thank you for response Stefan. Very enlightening.
|
2025-04-01T06:38:38.050906
| 2020-09-26T16:11:36
|
709564859
|
{
"authors": [
"abarisani",
"kenbell",
"prusnak"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5778",
"repo": "f-secure-foundry/tamago",
"url": "https://github.com/f-secure-foundry/tamago/pull/14"
}
|
gharchive/pull-request
|
Initial Pi VideoCore and DMA support
DMA is just memory-to-memory transfers for now
VideoCore mailbox is functional
Very cool! Here's the output of the functions for boards I have lying around. It seems this "interrogation" can indeed be used to detect the available RAM, as suggested in https://github.com/f-secure-foundry/tamago/pull/13#issuecomment-699247428.
However, other values, such as BoardModel/MACAddress/Serial/CPUAvailableDMAChannels, look fishy.
Raspberry Pi 1 A
FirmwareRevision: 8 (dec)
BoardModel: 0
MACAddress: 0xb827ebc95629
Serial: 0
CPUMemory: 0x0, 0xc000000
GPUMemory: 0xc000000, 0x4000000
CPUAvailableDMAChannels: 0x7f35
Raspberry Pi 1 B
FirmwareRevision: 13 (dec)
BoardModel: 0
MACAddress: 0xb827ebf4f296
Serial: 0
CPUMemory: 0x0, 0x1c000000
GPUMemory: 0x1c000000, 0x4000000
CPUAvailableDMAChannels: 0x7f35
Raspberry Pi 1 B+
FirmwareRevision: 0x10
BoardModel: 0
MACAddress: 0xb827eb32d0cc
Serial: 0
CPUMemory: 0x0, 0x1c000000
GPUMemory: 0x1c000000, 0x4000000
CPUAvailableDMAChannels: 0x7f35
Heya, any thoughts on my change request?
Hey - the comments look good. Got distracted with another project. I'll work on making the changes.
I've pushed a change that I think addresses the review comments.
|
2025-04-01T06:38:38.097636
| 2017-10-04T07:10:39
|
262687787
|
{
"authors": [
"jarifibrahim",
"rgarg1"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5779",
"repo": "fabric8-ui/fabric8-planner",
"url": "https://github.com/fabric8-ui/fabric8-planner/issues/2207"
}
|
gharchive/issue
|
Work item types vanish from work-item-quick-add list
To reproduce, click on the empty space (next to labels) of any work item.
Good one @jarifibrahim
Tracking via https://openshift.io/openshiftio/openshiftio/plan/detail/1549
|
2025-04-01T06:38:38.110882
| 2017-09-16T09:01:36
|
258219436
|
{
"authors": [
"jsight",
"nicolaferraro",
"stevef1uk"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5780",
"repo": "fabric8io/fabric8-maven-plugin",
"url": "https://github.com/fabric8io/fabric8-maven-plugin/issues/1053"
}
|
gharchive/issue
|
S2I not yet supported for the webapp-generator - for artefacts created from swagger-codegen
I decided to use swagger-codegen on a swagger.json I created
java -jar modules/swagger-codegen-cli/target/swagger-codegen-cli.jar generate -DdebugModels=true -i audit/swagger.json -l jaxrs -o samples/server/ob/jaxrs-spec
then used mvn io.fabric8:fabric8-maven-plugin:3.5.28:setup
then fabric8:run gave me the error:
[ERROR] Failed to execute goal io.fabric8:fabric8-maven-plugin:3.5.28:build (fmp) on project swagger-jaxrs-server: Execution fmp of goal io.fabric8:fabric8-maven-plugin:3.5.28:build failed: S2I not yet supported for the webapp-generator. Use -Dfabric8.mode=kubernetes or -Dfabric8.buildStrategy=docker for OpenShift mode. Please refer to the reference manual at https://maven.fabric8.io for details about build modes. -> [Help 1]
I seem to remember hitting this problem a few years ago and got around it by hacking on the pom file, but it would be nice not to have to.
Possibly related PR: https://github.com/fabric8io/fabric8-maven-plugin/pull/1060
@stevef1uk does it work with mvn fabric8:run -Dfabric8.mode=kubernetes?
|
2025-04-01T06:38:38.113115
| 2017-04-06T13:43:57
|
219903438
|
{
"authors": [
"michaelkleinhenz",
"nimishamukherjee",
"vikram-raj"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5781",
"repo": "fabric8io/fabric8-planner",
"url": "https://github.com/fabric8io/fabric8-planner/pull/1534"
}
|
gharchive/pull-request
|
fix(iteration): show date picker in create iteration modal (fix #1341)
Previously we didn't allow the user to choose the iteration date interval while creating a new iteration. This PR now allows the user to select the date interval when creating a new iteration.
The date pickers should be empty on new iterations, not filled with a default date. Is that the case with this change? If no, can you add that? Thanks!
@michaelkleinhenz No, right now the date picker is filled with the current date. And yes, I am adding that.
Current date/default does not show up when creating a new iteration:
|