| added (string; date 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | created (timestamp[us]; date 2001-10-09 16:19:16 to 2025-01-01 03:51:31) | id (string; length 4 to 10) | metadata (dict) | source (string; 2 classes) | text (string; length 0 to 1.61M) |
|---|---|---|---|---|---|
2025-04-01T06:40:55.649685
| 2016-10-05T11:38:10
|
181129782
|
{
"authors": [
"danielbachhuber",
"pioneerskies"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11897",
"repo": "welaika/wp-cli-db2utf8",
"url": "https://github.com/welaika/wp-cli-db2utf8/issues/2"
}
|
gharchive/issue
|
Consider adding functional tests to this project
Functional tests are an integral ingredient of high-quality, maintainable commands. WP-CLI tries to make it as easy as possible to add functional tests to your package with its wp scaffold package-tests command:
https://github.com/wp-cli/scaffold-package-command#wp-scaffold-package-tests
I'd encourage you to consider adding functional tests to your package :) By starting your functional tests early on, it also makes it much easier to maintain your project over time.
We love tests and I'll consider learning a bit of Behat as soon as I can.
@danielbachhuber ever seen those warnings:
0.22s$ composer validate --strict
You are running composer with xdebug enabled. This has a major impact on runtime performance. See https://getcomposer.org/xdebug
./composer.json is valid, but with a few warnings
See https://getcomposer.org/doc/04-schema.md for details on the schema
require.wp-cli/wp-cli : unbound version constraints (>=0.23.0) should be avoided
The command "composer validate --strict" failed and exited with 1 during .
Your build has been stopped.
??
Yep, see history of https://github.com/wp-cli/scaffold-package-command/pull/56
I have one last question. I've gitignored the following:
composer.lock
composer.phar
installer
just because everything seems to work without them. But I can't find what the best practice is, given that the scaffolded .gitignore was not ignoring them. Where could I find documentation about what to check into the repo, and why?
There isn't documentation about these specifically.
composer.lock is something you should commit to your project, because it locks new installations of your project to specific dependency versions.
composer.phar and installer aren't necessary in the installation step because composer is pre-installed on Travis, CircleCI, and other CI systems. I've created an issue to remove it https://github.com/wp-cli/scaffold-package-command/issues/59
Thanks for help and for all the fish :)
|
2025-04-01T06:40:55.652205
| 2017-12-20T01:32:00
|
283422047
|
{
"authors": [
"TApplencourt",
"welchbj"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11898",
"repo": "welchbj/tt",
"url": "https://github.com/welchbj/tt/pull/2"
}
|
gharchive/pull-request
|
Refactor fill function (Ok)
Sorry for the last pull request; Travis CI wasn't working, and neither was I :).
Anyways, the new fill function should be marginally quicker and arguably more maintainable.
Don't hesitate if you have any remarks or questions about the change.
I'm glad you like it
Thanks for submitting this PR! This is actually the first PR to be merged into the library. Expect to see a link to your GitHub profile on the special thanks page when I wrap up the 0.6.4 release.
Please feel free to submit additional PRs if you see any other areas of the codebase that could use refactoring. If you'd like to open up any design discussion or ideas for features, please feel free to open an issue.
Thanks for your time and effort!
|
2025-04-01T06:40:55.658262
| 2021-11-22T19:30:00
|
1060518411
|
{
"authors": [
"clemens-",
"weliem"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11899",
"repo": "weliem/blessed-android-coroutines",
"url": "https://github.com/weliem/blessed-android-coroutines/issues/9"
}
|
gharchive/issue
|
Unreliable notifications
Hi
I have a device using a nRF52 chip that sends a notification immediately after the notification is enabled to initially populate some fields in my Android app. I noted that this is not reliable and could narrow it down to something happening on Android, as I control the nRF52 and could verify that the notification is indeed sent. There is no log entry, it just does not happen. I have 4 characteristics, but get notifications for any number between 0 and 4 when I initially connect. Later, the notifications do work.
What I tried so far:
Delaying the observe() calls, all together or individually, spaced up to 2 seconds apart
Delaying the response from the nRF52 in the same manner
My code to set up notifications looks like this:
private suspend fun setupAmountNotification(peripheral: BluetoothPeripheral) {
    peripheral.getCharacteristic(
        UUID.fromString(SERVICE_UUID),
        UUID.fromString(CHAR_AMOUNT_UUID)
    )?.let {
        peripheral.observe(it) { value ->
            Log.i(TAG, "Notification for amount")
            val parser = BluetoothBytesParser(value, ByteOrder.LITTLE_ENDIAN)
            runOnUiThread {
                onAmountNotification(parser.getIntValue(FORMAT_UINT32))
            }
        }
    }
}
In turn, I get this log (the other observe functions are analogous):
D/MainActivity: Peripheral eo-4095 has CONNECTED
D/BluetoothGatt: setCharacteristicNotification() - uuid: e44e1403-14b3-457c-xxxx-34bf34932966 enable: true
D/BluetoothGatt: setCharacteristicNotification() - uuid: e44e1402-14b3-457c-xxxx-34bf34932966 enable: true
D/BluetoothGatt: setCharacteristicNotification() - uuid: e44e1401-14b3-457c-xxxx-34bf34932966 enable: true
D/BluetoothGatt: setCharacteristicNotification() - uuid: e44e1404-14b3-457c-xxxx-34bf34932966 enable: true
I/MainActivity: Notification for reservoir
I/MainActivity: Notification for status
Are you just missing the first notification you are expecting? And do later ones arrive normally?
If so, I might have an idea what it could be. I store the lambda for the callback after the enabling of the notification has succeeded. So there is indeed a race condition if the first notification comes in immediately, because the coroutine runs on a different thread.
Yes, I am just missing the first one; later ones do arrive normally. So your assumption could be correct. However, I tried to delay the response of the nRF52, but this did not really result in a different behaviour. And by delay I mean values between 10 and 1000 ms. Does it really take that long to store the lambda?
I worked around it by first reading the values manually and then waiting for the notification for all subsequent values. It does work, but it is less elegant than what I intended.
Is there anything I can try to make it work? It is not really a breaking bug though.
Ok, let me try a fix tomorrow....
I released a new version (0.1.2) with a possible fix. Can you try it?
Will do tonight!
Works like a charm! Thank you so much for your effort and work. And quick too!
|
2025-04-01T06:40:55.662552
| 2022-02-28T09:13:41
|
1153839358
|
{
"authors": [
"DominiqueMarshall",
"cbowskill"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11900",
"repo": "wellcomecollection/wellcomecollection.org",
"url": "https://github.com/wellcomecollection/wellcomecollection.org/issues/7726"
}
|
gharchive/issue
|
Removal of ticketing on requesting
What is it and who's it for?
As we remove the ability to book a ticket for the building, we also need to remove this functionality from the new requesting flows.
Implementation
Remove reference to booking a ticket from within the requesting flows (remove text and CTA)
@DominiqueMarshall - do you have a visual for how this confirmation screen should look without the reference to ticketing?
|
2025-04-01T06:40:55.667008
| 2023-07-06T10:36:41
|
1791284537
|
{
"authors": [
"gestchild",
"rcantin-w"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11901",
"repo": "wellcomecollection/wellcomecollection.org",
"url": "https://github.com/wellcomecollection/wellcomecollection.org/pull/10019"
}
|
gharchive/pull-request
|
Turn .container into a utility component
Who is this for?
Devs/maintenance
What is it doing for them?
One step closer to getting rid of utility classes, see #10018 for more details
I had to add isContainer to Space as Container and Space are both styled components and I couldn't find another way to merge them - if we hate it, let's discuss.
Something is up with styled-components and the type declaration, where it ruins the rendering of the rest of the file (see screenshots). If we move that type declaration out on its own, it renders properly. Extra faff, but so much more readable.
I don't love the isContainer
Same, I really don't like it!
I just realised there might be a simpler solution, it looks the same to me as prod does, so idk if I was just too close to it when I originally made the PR? Do confirm if I'm missing something?
Ha, that would work. It did actually cross my mind when I first looked at it and then I got caught up in trying to find a way to combine styled components and forgot.
Think that's what happened to me too 😅 sometimes we just have to walk away and come back. I'll merge on Monday 👍
|
2025-04-01T06:40:55.686738
| 2021-04-26T15:04:03
|
867804888
|
{
"authors": [
"aCampello",
"codecov-commenter"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11902",
"repo": "wellcometrust/WellcomeML",
"url": "https://github.com/wellcometrust/WellcomeML/pull/278"
}
|
gharchive/pull-request
|
Release 1.1.0
Description
Release 1.1.0
Checklist
[x] Change wellcomeml/__version__.py
[x] Add changelog
[ ] make dist
[ ] Verify new package was generated correctly on the pip registry
and GitHub releases
Codecov Report
Merging #278 (28bc28b) into main (e81be34) will decrease coverage by 58.77%.
The diff coverage is n/a.
:exclamation: Current head 28bc28b differs from pull request most recent head b4e2f0f. Consider uploading reports for the commit b4e2f0f to get more accurate results
@@ Coverage Diff @@
## main #278 +/- ##
===========================================
- Coverage 86.10% 27.33% -58.78%
===========================================
Files 41 41
Lines 2296 2290 -6
===========================================
- Hits 1977 626 -1351
- Misses 319 1664 +1345
Impacted Files | Coverage Δ
wellcomeml/ml/clustering.py | 17.82% <ø> (-72.28%) :arrow_down:
wellcomeml/metrics/ner_classification_report.py | 8.69% <0.00%> (-91.31%) :arrow_down:
wellcomeml/datasets/conll.py | 12.50% <0.00%> (-87.50%) :arrow_down:
wellcomeml/ml/cnn.py | 12.24% <0.00%> (-78.58%) :arrow_down:
wellcomeml/io/s3_policy_data.py | 14.49% <0.00%> (-76.82%) :arrow_down:
wellcomeml/ml/bilstm.py | 15.60% <0.00%> (-75.89%) :arrow_down:
wellcomeml/spacy/spacy_doc_to_prodigy.py | 25.00% <0.00%> (-75.00%) :arrow_down:
wellcomeml/ml/spacy_ner.py | 20.00% <0.00%> (-73.85%) :arrow_down:
wellcomeml/datasets/winer.py | 8.27% <0.00%> (-73.80%) :arrow_down:
... and 20 more
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update e81be34...b4e2f0f. Read the comment docs.
Will run make dist when approved.
Release new version WellcomeML
Starting to release...
Uploaded correctly. We're on 1.1.0 on PyPI
And on GitHub. 🎉
|
2025-04-01T06:40:55.688487
| 2018-12-06T17:05:03
|
388309148
|
{
"authors": [
"alexwlchan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11903",
"repo": "wellcometrust/platform",
"url": "https://github.com/wellcometrust/platform/pull/3144"
}
|
gharchive/pull-request
|
Create a topic for edits to the Miro VHS data
The editing script then forwards the new item to the topic, so the catalogue transformer gets it immediately.
In theory the reporting pipeline could subscribe to it as well.
It doesn't cover edits of the form “toggle the isClearedForCatalogueAPI parameter”, but we can easily add that later.
@kenoir All good now?
|
2025-04-01T06:40:55.701888
| 2024-06-13T11:10:15
|
2350863862
|
{
"authors": [
"stalkerGH",
"welpo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11904",
"repo": "welpo/tabi",
"url": "https://github.com/welpo/tabi/issues/329"
}
|
gharchive/issue
|
Add language support for search engine
I'm trying to add a search engine to my website. Unfortunately, it seems that my language is not supported by Zola & tabi:
Error: Failed to serve the site
Error: Tried to build search index for language pl which is not supported
Or maybe I should ask Elasticlunr developers...?
I believe Zola uses elasticlunr-rs. Related discussion: https://github.com/mattico/elasticlunr-rs/issues/13
If elasticlunr supports it and Zola is updated to use the latest elasticlunr, tabi will work with its search index (with no changes).
OK, I don't see support for my language. I have to investigate it.
|
2025-04-01T06:40:55.743242
| 2021-05-25T16:27:34
|
901060195
|
{
"authors": [
"pengzhendong",
"rohithkodali",
"temp1096"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11905",
"repo": "wenet-e2e/wenet",
"url": "https://github.com/wenet-e2e/wenet/issues/407"
}
|
gharchive/issue
|
Gigaspeech model broken link
The pretrained conformer model link provided for the Gigaspeech example doesn't work, says "file not exist"
Model link: http://mobvoi-speech-public.ufile.ucloud.cn/public/wenet/gigaspeech/20210115_conformer_exp.tar.gz
try http://mobvoi-speech-public.ufile.ucloud.cn/public/wenet/gigaspeech/20210520_conformer_exp.tar.gz
This works, thank you! Would you like to also push that updated link to the repo?
@pengzhendong I'm trying to load the model from that link, just calling "torch.load()" and giving it the path to the final.pt file, but I get errors - I've attached the stack trace below. Is there an issue with this model file, or is there another way I should be calling it? Thanks!
File "wenet/bin/export_jit.py", line 43, in <module>
load_checkpoint(model, args.checkpoint)
File "/Users/ark/Documents/projects/wenet/wenet/utils/checkpoint.py", line 18, in load_checkpoint
checkpoint = torch.load(path, map_location='cpu')
File "/opt/miniconda3/envs/wenet/lib/python3.8/site-packages/torch/serialization.py", line 577, in load
with _open_zipfile_reader(opened_file) as opened_zipfile:
File "/opt/miniconda3/envs/wenet/lib/python3.8/site-packages/torch/serialization.py", line 241, in __init__
super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: [enforce fail at inline_container.cc:144] . PytorchStreamReader failed reading zip archive: failed finding central directory
It has been updated.
Please change your pytorch version to 1.6.
Got it, thanks! For anybody who stumbles upon this in the future, I was actually using torch 1.6, but on a machine without a GPU - with a GPU it worked fine, even on torch 1.8.
Hello again! I just have another question trying to get the model working. The steps I followed were:
Download and untar the file provided at the download link
Export the model with python wenet/bin/export_jit.py --config 20210520_conformer_exp/train.yaml --checkpoint 20210520_conformer_exp/final.pt --output_file final.zip; this runs successfully
Try to recognize on the librispeech test-clean test set (since I don't have access to download gigaspeech yet):
python wenet/bin/recognize.py --gpu 0 --mode attention_rescoring --config 20210520_conformer_exp/train.yaml --checkpoint 20210520_conformer_exp/final.pt --test_data examples/librispeech/s0/data/test_clean/format.data --beam_size 20 --batch_size 1 --penalty 0.0 --dict 20210520_conformer_exp/words.txt
This fails, I believe because words.txt contains log probabilities of the tokens instead of the indices. I tried replacing the log probabilities with indices, but it seems like the words need to be correctly sorted, and use the correct special tokens as well. I'm able to decode if I use integer indices in words.txt, but the results are nonsensical, presumably because the indices don't correspond correctly to the proper tokens.
Is there another words.txt file that I should be using instead? Thanks again for the help!
Hello again @pengzhendong - just wanted to ask the above question again, if you have a chance to take a look.
The same issue for me as well, there are log probabilities instead of integer indices and I tried to convert them into indices but the output is worse.
Update for others - the same link now seems to point to an updated word list, that I was able to get running this time.
|
2025-04-01T06:40:55.764256
| 2024-11-07T10:56:37
|
2640655223
|
{
"authors": [
"Udaberrico",
"azgooon"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11906",
"repo": "weslinkde/laravel-postgres-tools",
"url": "https://github.com/weslinkde/laravel-postgres-tools/pull/6"
}
|
gharchive/pull-request
|
Add Local Disk Check and Temporary Snapshot Copy for pg_restore
This PR enhances the snapshot loading process by verifying if the disk from which the snapshot will be loaded is "local." If the disk is not local, the snapshot is first copied to a predefined temporary directory to enable access. The snapshot is then loaded using pg_restore, and the local copy is deleted after the process completes.
Additionally, the --if-exists flag is added to the pg_restore command to prevent errors in cases where certain database objects already exist, improving robustness and ensuring smoother restore operations.
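To make the described flow concrete, here is a rough sketch in Python under assumed paths and helpers (the package itself is PHP/Laravel; pg_restore's --clean/--if-exists flags are real, everything else is illustrative):
import os
import shutil
import subprocess
import tempfile

def restore_snapshot(snapshot_path, disk_is_local, database):
    local_path = snapshot_path
    temp_dir = None
    if not disk_is_local:
        # Copy the snapshot to a temporary directory so pg_restore can read it.
        temp_dir = tempfile.mkdtemp()
        local_path = os.path.join(temp_dir, os.path.basename(snapshot_path))
        shutil.copyfile(snapshot_path, local_path)
    try:
        # --if-exists (valid together with --clean) suppresses errors when
        # database objects already exist, as described above.
        subprocess.run(
            ["pg_restore", "--clean", "--if-exists", "--dbname", database, local_path],
            check=True,
        )
    finally:
        if temp_dir:
            shutil.rmtree(temp_dir)  # delete the local copy afterwards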
Thanks for your contribution, and sorry for the long approval time.
|
2025-04-01T06:40:55.822996
| 2024-04-03T08:05:46
|
2222213799
|
{
"authors": [
"big-dust",
"gerayking",
"terry-xuan-gao"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11907",
"repo": "wesql/wescale",
"url": "https://github.com/wesql/wescale/issues/472"
}
|
gharchive/issue
|
[Enhancement Request] Fine-Grained GTID Support for Improved Read-After-Write Performance
Background
The current implementation of the read_after_write consistency feature in the system relies on waiting for the execution of the last global transaction identifier (GTID), indiscriminately applying this method across SQL operations regardless of their data dependencies. This broad-stroke approach leads to unnecessarily high latency and decreased throughput for read-after-write operations, particularly when these operations do not interact with the same table. The lack of differentiation significantly hinders performance, especially in use cases where operations could otherwise proceed in parallel without data consistency issues.
Proposal
Implement Table-Level Read-After-Write Support: Introduce the capability for the system to intelligently discern operations across different tables, allowing for parallel processing of read-after-write operations where there are no direct data dependencies. This refinement is anticipated to substantially lower wait times for operations not confined to the same table, enhancing responsiveness.
Provide Configuration Options for Global and Table Levels: Offer users the ability to adjust read-after-write settings specifically for global and table levels. This granularity in configuration would empower users to tailor performance optimization strategies more precisely to their application's operational characteristics and requirements.
Performance Analysis for Global and Table Level Settings: Undertake a comprehensive analysis to evaluate the performance implications of utilizing global versus table-level settings for read-after-write operations. The insights gained from this analysis would equip users with the knowledge to make informed decisions, optimizing their configurations for either broader or more targeted performance improvements based on their specific scenarios.
Proposal For Read-After-Write Performance Improvement
Introduction
Hi, I find this project very cool and want to become a contributor. I wrote this proposal based on the ReadAfterWrite Consistency Document, and the changes I make are marked in bold or strikethrough.
This is only a preliminary version, and I hope that I can fully discuss the optimization logic with community members before designing the code implementation. Looking forward to your reply :)
Goals
Session Level ReadAfterWrite: Ensure read requests get the latest write in the same client connection.
Instance Level ReadAfterWrite: Ensure read requests get the latest write in the WeSQL WeScale Instance.
Implement Table-Level Read-After-Write Support.
Design Details
Step 1: Get GTID after write operation without extra network round
Starting from MySQL 5.7, the MySQL protocol implements a mechanism to collect the GTIDs to be sent over the wire in the response packet. This feature helps us acquire GTIDs without introducing extra network round trips.
To enable the feature:
The client needs to set the capability flag CLIENT_SESSION_TRACK when connecting to MySQL via the mysql protocol. This enables MySQL to send the tracking information back to the client.
The client also needs to issue SET @@SESSION_TRACK_GTIDS = 'OWN_GTID' to instruct MySQL to return the GTID in the OK packet. This system variable tracks the last DML and DDL commit GTID.
Step 2: Manage the latest GTID and update time for each table in the last t seconds
We can use struct LatestGTIDManager to manage the latest GTID and update time for each table.
The code below is just used to illustrate the method:
package gtid // hypothetical package name, for illustration only

import (
    "sync"
    "time"
)

// LatestGTIDEntry represents an entry in the LatestGTIDManager with the table name, GTID, and the time it was updated.
type LatestGTIDEntry struct {
    GTID       string
    UpdateTime time.Time
}
// LatestGTIDManager manages the latest GTID and update time for each table.
type LatestGTIDManager struct {
    latestGTIDs map[string]LatestGTIDEntry // Key is the table name, value is the LatestGTIDEntry struct.
    expireTime  time.Duration              // The expiration time for GTID entries.
    mu          sync.RWMutex               // Mutex for read-write synchronization.
    wg          sync.WaitGroup             // WaitGroup to wait for the cleanup goroutine to finish.
    done        chan struct{}              // Closed by Stop to terminate the cleanup goroutine.
}
// NewLatestGTIDManager creates a new instance of LatestGTIDManager.
func NewLatestGTIDManager(expireTime time.Duration) *LatestGTIDManager {
    return &LatestGTIDManager{
        latestGTIDs: make(map[string]LatestGTIDEntry),
        expireTime:  expireTime,
        done:        make(chan struct{}),
    }
}
// UpdateGTID updates the latest GTID and update time for a given table.
func (m *LatestGTIDManager) UpdateGTID(tableName, gtid string) {
    m.mu.Lock()
    defer m.mu.Unlock()
    m.latestGTIDs[tableName] = LatestGTIDEntry{
        GTID:       gtid,
        UpdateTime: time.Now(),
    }
}
// GetLatestGTID retrieves the latest GTID for a given table.
// If the table is not found or the GTID has expired, it returns an empty string and false.
func (m *LatestGTIDManager) GetLatestGTID(tableName string) (string, bool) {
    m.mu.RLock()
    defer m.mu.RUnlock()
    entry, ok := m.latestGTIDs[tableName]
    if !ok || time.Now().Sub(entry.UpdateTime) > m.expireTime {
        return "", false
    }
    return entry.GTID, true
}
// startCleaner starts a goroutine to periodically clean up expired GTID entries.
func (m *LatestGTIDManager) startCleaner() {
    m.wg.Add(1)
    go func() {
        defer m.wg.Done()
        ticker := time.NewTicker(m.expireTime)
        defer ticker.Stop()
        for {
            select {
            case <-m.done:
                return
            case <-ticker.C:
                m.mu.Lock()
                now := time.Now()
                for tableName, entry := range m.latestGTIDs {
                    if now.Sub(entry.UpdateTime) > m.expireTime {
                        delete(m.latestGTIDs, tableName)
                    }
                }
                m.mu.Unlock()
            }
        }
    }()
}

// Stop signals the cleanup goroutine to exit and waits for it to finish.
func (m *LatestGTIDManager) Stop() {
    close(m.done)
    m.wg.Wait()
}
Depending on the consistency level, the LatestGTIDManager may be initialized in the client’s session or in a global memory data structure.
// Initialize LatestGTIDManager with an expiration time of 10 seconds.
gm := NewLatestGTIDManager(10 * time.Second)
gm.startCleaner()
Step 3: Store the GTID in WeSQL WeScale sessions
After parsing the response packet and getting the GTIDs, WeSQL WeScale will store them in memory.
If the operation is a write operation, the LatestGTIDManager will update the latest GTID and write time for the table that has been written.
gm.UpdateGTID("my_table", "abcdefg-1234567-890")
Depending on the consistency level, the GTIDs may be stored in the client’s Session or in a global memory data structure.
When a read operation happens, we will utilize the LatestGTIDManager to get the Latest_GTID_for_Table_to_be_Read.
Two situations will occur at this time:
The table has been updated in the last t seconds: we get its Latest_GTID_for_Table_to_be_Read and enter Step 4.
The table has NOT been updated in the last t seconds: the information for this table has been cleaned up by the LatestGTIDManager. At this point, taking an aggressive view, since the last write to the table was at least t seconds ago, it can be considered that the last write has been replicated to every follower, so we can just pick a follower and read from it.
Later read operations will use the GTIDs stored in WeSQL WeScale’s memory to ensure retrieval of data that was previously written. See the following steps for more details.
Step 4: Select a MySQL follower for reading
A CLUSTER_GTID_EXECUTED memory data structure is maintained in WeSQL WeScale’s memory; it contains all the @@global.gtid_executed values from the cluster. The CLUSTER_GTID_EXECUTED is updated by the health-check module periodically, and it will obviously be lagging.
Therefore, GTIDs from Step 1 will constantly update CLUSTER_GTID_EXECUTED.
During the routing phase of a read operation, it will use the GTID (from the session or the global memory data structure) to pick a MySQL instance based on CLUSTER_GTID_EXECUTED.
During the routing phase of a read operation, it will use the Latest_GTID_for_Table_to_be_Read (from the LatestGTIDManager stored in the session or in global memory) to pick a MySQL instance based on CLUSTER_GTID_EXECUTED.
As long as the picked MySQL instance contains the Latest_GTID_for_Table_to_be_Read, the read operation can be directly forwarded to that MySQL instance.
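A hedged illustration of that routing decision (Python pseudocode; WeScale itself is written in Go, and real GTID sets are interval-based rather than plain sets):
def pick_follower(cluster_gtid_executed, latest_gtid_for_table):
    # cluster_gtid_executed: follower name -> set of executed GTIDs.
    for follower, executed in cluster_gtid_executed.items():
        if latest_gtid_for_table in executed:
            return follower  # this follower has already applied the write
    return None  # no candidate: fall back to the leader or to Step 5

# Example: only follower-2 has caught up to the table's latest GTID.
cluster = {
    "follower-1": {"uuid:1", "uuid:2"},
    "follower-2": {"uuid:1", "uuid:2", "uuid:3"},
}
print(pick_follower(cluster, "uuid:3"))  # follower-2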
Step 5: Ensure write requests have been propagated to the follower MySQL
All the follower MySQL instances may be lagging, or the CLUSTER_GTID_EXECUTED may be out of date for whatever reason. It is possible that no follower (the leader excepted, as it always holds all the data) is available for a read operation in Step 4.
We can either send the read operation to the leader, or send the read operation to a follower with a WAIT_FOR_EXECUTED_GTID_SET prefix. The WAIT_FOR_EXECUTED_GTID_SET function will keep waiting until a GTID is executed on the follower or until it times out.
We can use a multi-statement to save one network round:
-- for example, if user's SQL is: select * from t1;
-- the actual SQL sent to follower may be a multi-statement like this:
select WAIT_FOR_EXECUTED_GTID_SET('ab73d556-cd43-11ed-9608-6967c6ac0b32:7', 3);select * from t1;
We need to handle the mysql protocol carefully to use the multi-statement; otherwise the mysql connection may be broken.
Thank you for your interest in this topic. If you would like to proceed, please feel free to send an email to <EMAIL_ADDRESS>. Your understanding is correct, and we can discuss the implementation details further. We should consider the scalability of the implementation, because this feature fundamentally analyzes the dependency between two SQL statements. The basic approach is at the table level, and we will implement more fine-grained dependency detection.
Cool! I have sent you an email about the idea of dependency detection. I'll take some more time to read the source code carefully and look forward to discussing it with you further!
Hi Terry,
That's right, this is one of the current OSPP projects. You can start by pulling the code and getting wescale running. If you want more material to learn as much as possible about wescale, you can add me on WeChat: wanttowin399. Also, we are considering using the latest filter feature to implement this feature.
Best,
geray
Hi, I am very interested in this issue. Yesterday, I sent an email outlining some of my thoughts and ideas. I look forward to the opportunity to discuss them with you further. Thank you for your time and consideration!
|
2025-04-01T06:40:55.857081
| 2022-09-08T03:54:24
|
1365487475
|
{
"authors": [
"ashelkovnykov",
"matthew-levan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11908",
"repo": "wexpertsystems/seguro",
"url": "https://github.com/wexpertsystems/seguro/pull/1"
}
|
gharchive/pull-request
|
Implementation of Seguro Phase II
Includes:
Seguro Phase II
Unit tests
Integration tests
Benchmark suite
Benchmark results
makefile
Fully commented
Enhanced README
Notes:
A partially-complete version of Seguro Phase II that follows the design outlined in the proposal more closely is available on a different branch in my fork. It is trivial to git cherry-pick or git rebase the two branches together. However, I think the version included in this PR is the one we should go with.
Looks good. Only a few more questions above, then I'll approve and merge. Thank you.
GH isn't showing it well, but I pushed a new version of the final commit. It fixed the typo for "129 fragments or more", fixed inconsistent usage of "additional"/"total" to "remaining", and added a little more explanation to the fragment header.
|
2025-04-01T06:40:55.920128
| 2019-09-18T22:14:02
|
495480162
|
{
"authors": [
"codecov-io",
"jgrund"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11910",
"repo": "whamcloud/integrated-manager-for-lustre",
"url": "https://github.com/whamcloud/integrated-manager-for-lustre/pull/1211"
}
|
gharchive/pull-request
|
Fixup python cli
The Python manager CLI currently expects a sessionid for a non-logged-in user.
In more recent versions of Django, a sessionid is not provided
until a user has logged in.
In addition, we should update the csrftoken header after logging in to
match the csrftoken value passed back.
Signed-off-by: Joe Grund <EMAIL_ADDRESS>
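For context, a rough sketch of the fixed flow, using Python's requests library with hypothetical endpoint paths (this is not the project's actual client code, just an illustration of the behaviour described above):
import requests

session = requests.Session()
base = "https://manager.example.com"  # hypothetical host

# Before login there is no sessionid cookie; fetch a page to obtain a csrftoken.
session.get(base + "/api/session/")
session.headers["X-CSRFToken"] = session.cookies.get("csrftoken", "")

# Log in; recent Django versions only issue the sessionid cookie at this point,
# and the csrftoken cookie is rotated on login.
session.post(base + "/api/session/", json={"username": "admin", "password": "..."})

# Update the csrftoken header to match the value passed back, as the fix does.
session.headers["X-CSRFToken"] = session.cookies.get("csrftoken", "")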
Codecov Report
Merging #1211 into master will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #1211 +/- ##
=======================================
Coverage 95.39% 95.39%
=======================================
Files 2 2
Lines 152 152
=======================================
Hits 145 145
Misses 7 7
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a11a63e...7af640c. Read the comment docs.
|
2025-04-01T06:40:55.922178
| 2022-02-21T03:14:48
|
1145267901
|
{
"authors": [
"FlipperPA",
"runningzyp"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11911",
"repo": "wharton/drf-excel",
"url": "https://github.com/wharton/drf-excel/pull/53"
}
|
gharchive/pull-request
|
Rename wrong words
def_excel 😃
HA, excellent catch - thank you! I'll merge this tomorrow, @runningzyp - and don't forget to add yourself to contributors in the README too!
|
2025-04-01T06:40:55.957723
| 2018-02-25T10:09:17
|
300013961
|
{
"authors": [
"jonaskello",
"spion"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11912",
"repo": "whoeverest/wsrun",
"url": "https://github.com/whoeverest/wsrun/issues/13"
}
|
gharchive/issue
|
Suggestion: Switches to control how missing packages are handled
This is a follow up suggestion for #10.
There could be a set of switches controlling how missing packages are handled:
--error-missing: This is the default and this switch does not need to be specified (and it doesn't even have to exist other than for documentation purposes). If a package is missing it will break with an error without running any of the packages. This is already the default so nothing has to be done (except maybe add documentation).
--warn-missing: If a package is missing a warning will be printed but all packages that have the script will still be run. This is the same as the existing --exclude-missing (and it could of course keep that name). So it is already done.
--ignore-missing: This is the lerna/oao behaviour. If a package is missing it will be ignored without any warnings. This switch would need to be added.
I'm not generally fond of solving problems by adding more switches, but I think most people coming from lerna/oao will want the --ignore-missing behaviour to feel at home and be able to migrate to wsrun. Existing lerna users have already voiced an opinion towards this in the yarn RFC.
If this would be an accepted approach I could work on a PR for this.
Please don't open new issues. We can continue the discussion in #10
Yes, this suggestion is different than the one in #10 (which is about eliminating the warnings for --exclude-missing). I thought it would be cleaner to have a separate issue, but sure, we can discuss this suggestion in #10 too.
I agree that adding more switches is not good (as I already stated in the original post in this issue). But since #10 has a wont-fix label and there seems to be no agreement, it could be argued that leaving lerna users without any migration option is worse than adding more switches.
Lerna users are provided with a migration option, which warns that the behaviour is not desirable. The entire point of adding the --exclude-missing option is to support migrating lerna users without endorsing lerna's behaviour as acceptable or good.
|
2025-04-01T06:40:55.966714
| 2015-11-06T17:33:18
|
115551790
|
{
"authors": [
"DEGoodmanWilson",
"whoshuu"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11914",
"repo": "whoshuu/cpr",
"url": "https://github.com/whoshuu/cpr/issues/60"
}
|
gharchive/issue
|
Handle curl_easy_perform() errors
Hello! I'd like to be able to detect, at a finer-grained resolution, errors that prevent curl from performing a request. It seems the right place to start is to change session.cpp:350 to capture the return value from curl_easy_perform(), and then perhaps add some error fields to the cpr::Response that is returned.
I'm happy to open a PR with this feature added if there were interest in it; and if so, I'd like to get your opinion on the right way to structure this feature.
This is a great suggestion @DEGoodmanWilson. In my own testing, I've always just set CURL_VERBOSE to 1 to get more information when curl doesn't do what I want it to do.
Now that the library has matured a bit since those days, I can definitely see the value in having a more stable error API.
One approach is to surface curl errors and have the users inspect those errors to figure out what's wrong. I can see how this approach might cause some problems in the future, because I want to leave the frontend of the API as curl-free as possible. Ideally, the design of the interface should allow for the implementation to completely swap out curl for another http framework (looking mostly at Boost.ASIO).
That said, that's a pretty long way off and I think there's enough value here that it's worth doing soon. I think minimally the curl error code could be captured in an integer field in cpr::Response, and potentially the string error from curl_easy_strerror can be captured in cpr::Response as well.
Feel free to throw up a PR and we could take it from there!
Thanks for the quick response! Sounds good. I share your concerns about exposing Curl to the end-user of CPR. Perhaps we could inter-translate into a CPR-owned enum (but continue using the curl-generated message, since that is meant for humans rather than computers anyway?)
What is your feeling on throwing an exception instead of adding the error to cpr::Response? This could be problematic for the async versions of the HTTP methods. My own inclination is not to throw an exception, but I can see arguments either way.
I would prefer to keep the library as exception-free as possible. Clients could always wrap their own exception mechanisms over a simple error code they can check. Conversely, a client could absorb exceptions and write their own error codes against them, but I think the former is a bit nicer to larger groups of people (looking at you, Google Coding Standards).
I think the approach you're suggesting is reasonable, I'll have to take a closer look at #61 tomorrow when I have some free cycles. If that's good to go, I'll close this issue when that gets merged in.
Does that sound good?
:+1: Sounds awesome.
Merged in https://github.com/whoshuu/cpr/commit/bdb877c4ae29423ec45a8cb37125e14c91983da1. Thanks for the PR and work!
|
2025-04-01T06:40:55.983515
| 2018-04-26T06:59:39
|
317900940
|
{
"authors": [
"whr94621",
"zhengzx-nlp"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11915",
"repo": "whr94621/NJUNMT-pytorch",
"url": "https://github.com/whr94621/NJUNMT-pytorch/pull/16"
}
|
gharchive/pull-request
|
Init adaptation for Pytorch 0.4.0
shared mode does not work yet
The only question is whether we should use enable_grad(), set_grad_enabled(), or no_grad() as a context manager.
I think a context manager makes it clearer that the gradient mode is only defined within the block below the with statement.
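For reference, a minimal sketch of the context-manager style under discussion, using the PyTorch 0.4 API:
import torch

x = torch.randn(3, requires_grad=True)

# Gradient mode is scoped to the with-block and restored on exit.
with torch.no_grad():
    y = x * 2
print(y.requires_grad)  # False

# set_grad_enabled() toggles the mode from a boolean flag (e.g. train vs. eval).
is_train = False
with torch.set_grad_enabled(is_train):
    z = x * 2
print(z.requires_grad)  # False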
I will merge this PR first. We can complete adaptation afterwards :)
|
2025-04-01T06:40:55.985277
| 2019-06-18T14:49:52
|
457528389
|
{
"authors": [
"cal2195",
"whyboris"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11916",
"repo": "whyboris/Video-Hub-App",
"url": "https://github.com/whyboris/Video-Hub-App/pull/192"
}
|
gharchive/pull-request
|
version bumps
Nothing major; the biggest two are:
electron-builder
electron
But both are just minor version bumps 👌
npm run electron:mac still works -- 🙆♂
Current Mac dmg file size: 104MB
🎉
|
2025-04-01T06:40:55.988295
| 2016-02-17T20:17:49
|
134388494
|
{
"authors": [
"godspeedelbow",
"whyhankee"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11917",
"repo": "whyhankee/flw",
"url": "https://github.com/whyhankee/flw/issues/2"
}
|
gharchive/issue
|
Add ES6 Map support to setup a waterfall flow
It's impossible to use an Object to set up a waterfall flow, because the order of the keys is not guaranteed on an Object. However, an ES6 Map is ordered! ("A Map object iterates its elements in insertion order"). Without a Map literal, Map will be unwieldy, but there's hope; assuming that gets fixed, it could look something like this:
flw.waterfall([ // imaginary Map literal syntax
  foo: function getFoo(callback) {
    callback(null, 'I pity the foo');
  },
  bar: function getBar(results, callback) {
    console.log(results.foo);
    callback();
  },
]);
getBar gets two arguments, results and callback. results is a hashmap containing keys for each of the previously finished functions, in this case one key foo which contains the value with which the getFoo callback was called.
A waterfall is just a series that returns values. Now you have a nice context object, so a waterfall is not really needed, I think.
I think this would be a whole new project, mapperFall or something :)
|
2025-04-01T06:40:55.996638
| 2021-06-30T00:05:35
|
933191195
|
{
"authors": [
"coveralls",
"lalmei"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11918",
"repo": "whylabs/whylogs",
"url": "https://github.com/whylabs/whylogs/pull/248"
}
|
gharchive/pull-request
|
WIP: NLP metrics
Description
General Checklist
[ ] Tests added for this feature/bug
if it was a bug, test must cover it.
[ ] Conform by the style guides, by using formatter
[ ] Documentation updated
[ ] (optional) Please add a label to your PR
Pull Request Test Coverage Report for Build 987111448
2 of 40 (5.0%) changed or added relevant lines in 3 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage decreased (-0.6%) to 79.277%
Changes Missing Coverage | Covered Lines | Changed/Added Lines | %
src/whylogs/core/model_profile.py | 1 | 4 | 25.0%
src/whylogs/core/metrics/nlp_metrics.py | 0 | 35 | 0.0%
Totals
Change from base Build 980164614: -0.6%
Covered Lines: 3352
Relevant Lines: 4062
💛 - Coveralls
Still need to add a notebook for the overall metrics, and some more tests
|
2025-04-01T06:40:56.003481
| 2024-02-14T22:03:27
|
2135275245
|
{
"authors": [
"TheMorningStarLucifer"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11920",
"repo": "wickercar/foundry-ai-text-importer",
"url": "https://github.com/wickercar/foundry-ai-text-importer/issues/4"
}
|
gharchive/issue
|
AI wont validate
I have my AI key and I also have ChatGPT-4, yet when I click on validate it says I don't have access to GPT-4. What am I doing wrong?
I have looked for some type of install guide, but there appears to be none.
|
2025-04-01T06:40:56.011719
| 2021-12-03T12:58:16
|
1070565154
|
{
"authors": [
"JohannesKarwou",
"codecov-commenter"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11921",
"repo": "wiederm/transformato",
"url": "https://github.com/wiederm/transformato/pull/40"
}
|
gharchive/pull-request
|
Option for choosing OpenCL
new option in the config file:
GPU = True, openMM uses CUDA
GPU = False, openMM uses CPU
GPU = OpenCL, openMM uses OpenCL
Codecov Report
Merging #40 (91c0672) into master (9c62c39) will decrease coverage by 0.50%.
The diff coverage is 10.52%.
I would prefer a routine that checks if CUDA is available; if it is, it will use CUDA, otherwise it will fall back to OpenCL. Here are some examples in which people have done this before: https://programtalk.com/python-examples/simtk.openmm.Platform.getPlatformByName/
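A minimal sketch of such a fallback routine, assuming the simtk.openmm Platform API used in the linked examples:
from simtk.openmm import Platform

def pick_platform():
    # Prefer CUDA, then fall back to OpenCL, and finally to the CPU platform.
    for name in ("CUDA", "OpenCL", "CPU"):
        try:
            return Platform.getPlatformByName(name)
        except Exception:
            continue  # this platform is not available on the current machine
    raise RuntimeError("No usable OpenMM platform found")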
There is an option already implemented by CHARMM-GUI I think we can use that, I will try it soon
# Set platform
DEFAULT_PLATFORMS = "CUDA", "OpenCL", "CPU"
enabled_platforms = [
    Platform.getPlatform(i).getName() for i in range(Platform.getNumPlatforms())
]
if args.platform:
    if not args.platform[0] in enabled_platforms:
        print(
            "Unable to find OpenMM platform '{}'; exiting".format(args.platform[0]),
            file=sys.stderr,
        )
        sys.exit(1)
    platform = Platform.getPlatformByName(args.platform[0])
else:
    for platform in DEFAULT_PLATFORMS:
        if platform in enabled_platforms:
            platform = Platform.getPlatformByName(platform)
            break
    if isinstance(platform, str):
        print("Unable to find any OpenMM platform; exiting", file=sys.stderr)
        sys.exit(1)
|
2025-04-01T06:40:56.055406
| 2017-01-23T07:59:00
|
202467181
|
{
"authors": [
"bstansberry",
"jfdenise",
"jtymel",
"wildfly-ci"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11922",
"repo": "wildfly/wildfly-core",
"url": "https://github.com/wildfly/wildfly-core/pull/2107"
}
|
gharchive/pull-request
|
[WFCORE-2226] Add @Test annotation to a test in ValueTypeCompletionTestCase
https://issues.jboss.org/browse/WFCORE-2226
Can one of the admins verify this patch?
@jtymel, this patch is fine. Thanks for the fix.
this is ok to test
|
2025-04-01T06:40:56.066753
| 2024-06-28T01:39:16
|
2379383055
|
{
"authors": [
"Tobyntobyn",
"victor-wildlife"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11923",
"repo": "wildlifeai/Seeed_Grove_Vision_AI_Module_V2",
"url": "https://github.com/wildlifeai/Seeed_Grove_Vision_AI_Module_V2/pull/2"
}
|
gharchive/pull-request
|
Timed interrupt fatfs
Lots of automatic reformatting has happened; ignore most of the changes, as the code structure hasn't changed.
Changes that have been made:
Timed_interrupt_fatfs app added to scenario_apps
Seeed_sample app added to scenario_apps
i2c_slave_app added to scenario_apps
I've merged main from the Himax team so the repo is up to date.
Pull this branch locally, then within the makefile, update APP_TYPE to the desired app to run.
Timed_interrupt_fatfs requires an SD card to be inserted into the device in order to run.
Closing this PR as it no longer needs to be merged, but leaving it as a potential reference source.
|
2025-04-01T06:40:56.075027
| 2020-12-14T22:30:58
|
766984398
|
{
"authors": [
"Lominean",
"brandonb927",
"inavarrorubio",
"salomvary",
"sebastianhaas",
"wildlyinaccurate"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11924",
"repo": "wildlyinaccurate/jekyll-responsive-image",
"url": "https://github.com/wildlyinaccurate/jekyll-responsive-image/issues/101"
}
|
gharchive/issue
|
Jekyll 4.2.0 breaks this plugin
I just upgraded to Jekyll 4.2.0 (which was released recently) and noticed that I get an error upon attempting to run my project.
Here is the stacktrace:
Configuration file: [...]/_config.yml
Configuration file: [...]/_config_dev.yml
Source: [...]
Destination: [...]/build_dev
Incremental build: disabled. Enable with --incremental
Generating...
Creating output directory [...]/build_dev/assets/media/r
Generating [...]/build_dev/assets/media/r/imagename.jpg
Liquid Exception: undefined method `filter_cache' for nil:NilClass in [...]/_posts/postname.md
bundler: failed to load command: jekyll ([...]/.rbenv/versions/2.6.3/bin/jekyll)
NoMethodError: undefined method `filter_cache' for nil:NilClass
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/filters.rb:425:in `item_property'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/filters.rb:385:in `block in sort_input'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/filters.rb:385:in `map'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/filters.rb:385:in `sort_input'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/filters.rb:320:in `sort'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/strainer.rb:56:in `invoke'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/context.rb:86:in `invoke'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/variable.rb:84:in `block in render'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/variable.rb:82:in `each'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/variable.rb:82:in `inject'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/variable.rb:82:in `render'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/tags/assign.rb:26:in `render'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/block_body.rb:103:in `render_node_to_output'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/block_body.rb:91:in `render'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/template.rb:208:in `block in render'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/template.rb:242:in `with_profiling'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/template.rb:207:in `render'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/template.rb:220:in `render!'
[...]lib/jekyll-responsive-image/renderer.rb:28:in `render_responsive_image'
[...]lib/jekyll-responsive-image/tag.rb:16:in `render'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/block_body.rb:103:in `render_node_to_output'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/block_body.rb:91:in `render'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/template.rb:208:in `block in render'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/template.rb:242:in `with_profiling'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/template.rb:207:in `render'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/template.rb:220:in `render!'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/liquid_renderer/file.rb:39:in `block (3 levels) in render!'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/liquid_renderer/file.rb:59:in `measure_counts'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/liquid_renderer/file.rb:38:in `block (2 levels) in render!'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/liquid_renderer/file.rb:63:in `measure_bytes'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/liquid_renderer/file.rb:37:in `block in render!'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/liquid_renderer/file.rb:70:in `measure_time'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/liquid_renderer/file.rb:36:in `render!'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/renderer.rb:131:in `render_liquid'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/renderer.rb:80:in `render_document'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/renderer.rb:63:in `run'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/site.rb:547:in `render_regenerated'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/site.rb:532:in `block (2 levels) in render_docs'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/site.rb:531:in `each'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/site.rb:531:in `block in render_docs'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/site.rb:530:in `each_value'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/site.rb:530:in `render_docs'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/site.rb:210:in `render'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/site.rb:80:in `process'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/command.rb:28:in `process_site'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/commands/build.rb:65:in `build'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/commands/build.rb:36:in `process'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/command.rb:91:in `block in process_with_graceful_fail'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/command.rb:91:in `each'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/command.rb:91:in `process_with_graceful_fail'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/commands/build.rb:18:in `block (2 levels) in init_with_program'
[...]lib/ruby/gems/2.6.0/gems/mercenary-0.4.0/lib/mercenary/command.rb:221:in `block in execute'
[...]lib/ruby/gems/2.6.0/gems/mercenary-0.4.0/lib/mercenary/command.rb:221:in `each'
[...]lib/ruby/gems/2.6.0/gems/mercenary-0.4.0/lib/mercenary/command.rb:221:in `execute'
[...]lib/ruby/gems/2.6.0/gems/mercenary-0.4.0/lib/mercenary/program.rb:44:in `go'
[...]lib/ruby/gems/2.6.0/gems/mercenary-0.4.0/lib/mercenary.rb:21:in `program'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/exe/jekyll:15:in `<top (required)>'
[...]bin/jekyll:23:in `load'
[...]bin/jekyll:23:in `<top (required)>'
If I have some time I might file a PR as I have some other modifications in a fork I'd like to have considered upstream.
fights off bot
Same problem here.
Sorry for such a slow reply on this. I'm not sure whether it's a Jekyll or Liquid change, but it's frustrating to have a breaking change regardless! I'll take a look at #103 and see whether I can get a release ready ASAP.
Does anyone have a sample site/config that shows this issue? I can't reproduce it.
Yes, see https://github.com/wildlyinaccurate/jekyll-responsive-image/pull/103#discussion_r563534717
Please don't, this is still an issue. On 13.02.2021 at 13:13, "stale[bot]" <EMAIL_ADDRESS> wrote: This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
2025-04-01T06:40:56.094638
| 2019-10-21T17:02:11
|
510123050
|
{
"authors": [
"amatsukawa",
"selflein",
"williamFalcon"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11926",
"repo": "williamFalcon/pytorch-lightning",
"url": "https://github.com/williamFalcon/pytorch-lightning/issues/404"
}
|
gharchive/issue
|
Multiple optimizers but only one loss
Hey! I have a question regarding this library. I really like how it forces me to structure my code better. I encountered one problem I did not know how to solve based on the documentation.
Let's say I have two optimizers for two parts of the network, e.g. my configure_optimizers() looks like this:
def configure_optimizers(self):
    optimizer_encoder = optim.Adam(self.encoder.parameters(), ...)
    optimizer_decoder = optim.Adam(self.decoder.parameters(), ...)
    return [optimizer_encoder, optimizer_decoder]
Now in the training loop I forward pass the encoder, then the decoder, and compute my loss based on the output:
def training_step(self, batch, batch_nb, optimizer_idx):
    inp, gt = ...
    encoding = self.encoder(inp)
    pred = self.decoder(encoding)
    loss = F.mse_loss(pred, gt)
    return {'loss': loss}
Since I have two optimizers, I have to respect that this function is called twice with different optimizer_idx values; however, I have just one loss to backprop. How would I go about this?
What have you tried?
I tried something like this
def training_step(self, batch, batch_nb, optimizer_idx):
    if optimizer_idx == 1:
        return {}
    inp, gt = ...
    encoding = self.encoder(inp)
    pred = self.decoder(encoding)
    loss = F.mse_loss(pred, gt)
    return {'loss': loss}
However, this leads to an error since no loss key is present in trainer.py:1392.
in that case, just pass in both sets of params to a single optimizer
But I explicitly want two different learning rates for different parts of the network. That is not really possible with a single optimizer AFAIK. One possibility could be to scale gradients on the weights for which I want a lower learning rate before running the optimizer, but that is really not a clean solution.
It is possible with parameter groups using a single optimizer. Your use-case is actually the example in the docs: https://pytorch.org/docs/stable/optim.html#per-parameter-options
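For the record, a minimal sketch of that per-parameter-options pattern applied to the encoder/decoder split from this thread (module shapes and learning rates are made up for illustration):
import torch.nn as nn
import torch.optim as optim

encoder = nn.Linear(10, 5)
decoder = nn.Linear(5, 10)

# One optimizer, two parameter groups with different learning rates.
optimizer = optim.Adam([
    {"params": encoder.parameters(), "lr": 1e-3},
    {"params": decoder.parameters(), "lr": 1e-4},
])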
That’s nice. Thank you for the hint!
|
2025-04-01T06:40:56.106110
| 2024-01-26T19:39:18
|
2102790901
|
{
"authors": [
"Redoxahmii",
"loeffel-io",
"matmilbury"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11927",
"repo": "williamboman/mason-lspconfig.nvim",
"url": "https://github.com/williamboman/mason-lspconfig.nvim/issues/352"
}
|
gharchive/issue
|
Tailwind Language Server Slow
Problem description
The Tailwind language server is very slow compared to the other servers I am using, causing a delay of nearly 1 second before it displays anything. I am using LazyVim and have not really changed anything.
Why do you think this is an issue with mason-lspconfig.nvim?
The issue seems to be with how the LSP server is being handled, but I am not completely sure, so any help is appreciated.
Neovim version (>= 0.7)
NVIM v0.9.5
Build type: Release
LuaJIT 2.1.1702233742
Operating system/version
Linux redox-laptop 6.7.1-arch1-1 #1 SMP PREEMPT_DYNAMIC Sun, 21 Jan 2024 22:14:10 +0000 x86_64 GNU/Linux
I've manually reviewed the Nvim LPS client log (:LspLog) to find potential errors
[x] Yes
I've recently downloaded the latest plugin version of mason.nvim, mason-lspconfig.nvim, and nvim-lspconfig
[x] Yes
Affected language servers
Tailwind Language Server
Steps to reproduce
Just simply use it.
Actual behavior
some issue with how the information is handed back from the server but not sure.
Expected behavior
should be faster.
LspInfo
Language client log: /home/redox/.local/state/nvim/lsp.log
Detected filetype: typescriptreact
4 client(s) attached to this buffer:
Client: tsserver (id: 1, bufnr: [269, 3])
filetypes: javascript, javascriptreact, javascript.jsx, typescript, typescriptreact, typescript.tsx
autostart: true
root directory: /home/redox/Code/Native/BloodDonation
cmd: /home/redox/.local/share/nvim/mason/bin/typescript-language-server --stdio
Client: emmet_language_server (id: 2, bufnr: [269, 3])
filetypes: css, eruby, html, htmldjango, javascriptreact, less, pug, sass, scss, typescriptreact
autostart: true
root directory: /home/redox/Code/Native/BloodDonation
cmd: /home/redox/.local/share/nvim/mason/bin/emmet-language-server --stdio
Client: tailwindcss (id: 3, bufnr: [269, 3])
filetypes: aspnetcorerazor, astro, astro-markdown, blade, clojure, django-html, htmldjango, edge, eelixir, elixir, ejs, erb, eruby, gohtml, gohtmltmpl, haml, handlebars, hbs, html, html-eex, heex, jade, leaf, liquid, mdx, mustache, njk, nunjucks, php, razor, slim, twig, css, less, postcss, sass, scss, stylus, sugarss, javascript, javascriptreact, reason, rescript, typescript, typescriptreact, vue, svelte
autostart: true
root directory: /home/redox/Code/Native/BloodDonation
cmd: /home/redox/.local/share/nvim/mason/bin/tailwindcss-language-server --stdio
Client: copilot (id: 4, bufnr: [269, 3])
filetypes:
autostart: false
root directory: /home/redox/Code/Native/BloodDonation
cmd: node /home/redox/.local/share/nvim/lazy/copilot.lua/copilot/index.js
Other clients that match the filetype: typescriptreact
Config: eslint
filetypes: javascript, javascriptreact, javascript.jsx, typescript, typescriptreact, typescript.tsx, vue, svelte, astro
root directory: Not found.
cmd: /home/redox/.local/share/nvim/mason/bin/vscode-eslint-language-server --stdio
cmd is executable: true
autostart: true
custom handlers: eslint/openDoc, eslint/noLibrary, eslint/probeFailed, eslint/confirmESLintExecution
Configured servers list: lua_ls, pyright, marksman, cssls, emmet_language_server, eslint, tailwindcss, jsonls, ruff_lsp, yamlls, html, tsserver, volar
LspLog
No response
Healthcheck
mason: require("mason.health").check()
mason.nvim ~
- OK mason.nvim version v1.9.0
- OK PATH: prepend
- OK Providers:
mason.providers.registry-api
mason.providers.client
- OK neovim version >= 0.7.0
mason.nvim [Registries] ~
- OK Registry `github.com/mason-org/mason-registry version: 2024-01-26-net-canoe` is installed.
mason.nvim [Core utils] ~
- OK unzip: `UnZip 6.00 of 20 April 2009, by Info-ZIP. Maintained by C. Spieler. Send`
- OK wget: `GNU Wget 1.21.4 built on linux-gnu.`
- OK curl: `curl 8.5.0 (x86_64-pc-linux-gnu) libcurl/8.5.0 OpenSSL/3.2.0 zlib/1.3.1 brotli/1.1.0 zstd/1.5.5 libidn2/2.3.4 libpsl/0.21.2 (+libidn2/2.3.4) libssh2/1.11.0 nghttp2/1.59.0`
- OK gzip: `gzip 1.13`
- OK tar: `tar (GNU tar) 1.35`
- OK bash: `GNU bash, version 5.2.26(1)-release (x86_64-pc-linux-gnu)`
- OK sh: `Ok`
mason.nvim [Languages] ~
- WARNING Go: not available
- ADVICE:
- spawn: go failed with exit code - and signal -. go is not executable
- WARNING Composer: not available
- ADVICE:
- spawn: composer failed with exit code - and signal -. composer is not executable
- WARNING PHP: not available
- ADVICE:
- spawn: php failed with exit code - and signal -. php is not executable
- WARNING Ruby: not available
- ADVICE:
- spawn: ruby failed with exit code - and signal -. ruby is not executable
- WARNING RubyGem: not available
- ADVICE:
- spawn: gem failed with exit code - and signal -. gem is not executable
- OK node: `v20.10.0`
- OK cargo: `cargo 1.75.0`
- WARNING julia: not available
- ADVICE:
- spawn: julia failed with exit code - and signal -. julia is not executable
- OK python: `Python 3.11.6`
- OK luarocks: `/usr/bin/luarocks 3.9.2`
- OK java: `openjdk version "17.0.10" 2024-01-16`
- OK javac: `javac 17.0.10`
- OK npm: `10.2.3`
- OK pip: `pip 23.3.2 from /usr/lib/python3.11/site-packages/pip (python 3.11)`
- OK python venv: `Ok`
mason.nvim [GitHub] ~
- OK GitHub API rate limit. Used: 4. Remaining: 56. Limit: 60. Reset: Sat 27 Jan 2024 01:26:03 AM PKT.
Install and authenticate via gh-cli to increase rate limit.
Screenshots or recordings
No response
I can confirm. When editing a simple markdown file, my neovim freezes for a few seconds. Then reacts to input, then freezes again. It's unusable.
Uninstalling the tailwind lsp server alleviated the issue.
Maybe the issue comes from mason installing a very old version of the tailwindcss LSP:
✓ tailwindcss-language-server tailwindcss
Language Server Protocol implementation for Tailwind CSS.
installed version 0.0.27
homepage https://github.com/tailwindlabs/tailwindcss-intellisense
languages CSS
categories LSP
executables tailwindcss-language-server
current version: 0.12.6
From what I've heard, the issue with tailwind-lsp comes from the fact that there isn't any way to control the results being sent back, and nvim-cmp unfortunately also lacks the ability at the moment to handle them asynchronously.
The response being sent by the LSP is huge, which causes the hiccup in the editor as @matmilbury has mentioned.
For those that might end up here, I would recommend using yionke's fork of nvim-cmp that implements this, and also giving blink.cmp a try.
Blink is very new at the moment and needs more polish, but if you can make do without a lot of external cmp sources you'd be more than happy with it.
It does not seem like tailwind will provide this option of restricting outputs anytime in the near future, so most likely this issue will need to be handled by nvim-cmp. I'm currently using the fork from yionke and it works absolutely fine for me with tailwind.
|
2025-04-01T06:40:56.121985
| 2022-02-11T22:20:05
|
1133194155
|
{
"authors": [
"aryzing",
"williamboman"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11928",
"repo": "williamboman/nvim-lsp-installer",
"url": "https://github.com/williamboman/nvim-lsp-installer/issues/476"
}
|
gharchive/issue
|
yamlls: Unable to find executable
Problem description
yamlls server is not running, fails to find executable.
Config: yamlls
filetypes: yaml, yaml.docker-compose
root directory: /home/aryzing/workspace/project
cmd: yaml-language-server --stdio
cmd is executable: Unable to find executable. Please check your path and ensure the server is installed
autostart: true
custom handlers:
Neovim version (>= 0.6)
NVIM v0.7.0-dev+1048-gdba1df635
Build type: RelWithDebInfo
LuaJIT 2.1.0-beta3
Operating system/version
Linux linux 5.11.0-49-generic #55-Ubuntu SMP Wed Jan 12 17:36:34 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
I've recently downloaded the latest plugin version of both nvim-lsp-installer and nvim-lspconfig
[X] Yes
Affected language servers
yamlls
Actual behavior
yamlls is not running
Expected behavior
For yamlls to run
LspInstallInfo output
✓ yamlls
installed 12 Feb 2022 00:14
filetypes yaml, yaml.docker-compose
path ~/.local/share/nvim/lsp_servers/yaml
homepage https://github.com/redhat-developer/yaml-language-server
↓ Server configuration schema (press enter to collapse)
→ redhat.telemetry.enabled default: null
→ yaml.completion default: true
→ yaml.customTags default: []
→ yaml.disableAdditionalProperties default: false
→ yaml.format.bracketSpacing default: true
→ yaml.format.enable default: true
→ yaml.format.printWidth default: 80
→ yaml.format.proseWrap default: "preserve"
→ yaml.format.singleQuote default: false
→ yaml.hover default: true
→ yaml.maxItemsComputed default: 5000
→ yaml.schemaStore.enable default: true
→ yaml.schemaStore.url default: "https:\/\/www.schemastore.org\/api\/json\/catalog.json"
→ yaml.schemas default: {}
→ yaml.trace.server default: "off"
→ yaml.validate default: true
Installation log
Believe these are the relevant lines, let me know if you need more. These match the timestamp above, and correspond to the current installed instance that's not working.
[INFO Sat 12 Feb 2022 00:14:49 EET] ...-installer/lua/nvim-lsp-installer/ui/status-win/init.lua:644: Starting install server_name="yamlls", requested_version=""
[INFO Sat 12 Feb 2022 00:14:51 EET] ...-installer/lua/nvim-lsp-installer/ui/status-win/init.lua:663: Installation completed server_name="yamlls", success=true
Healthcheck
nvim-lsp-installer: require("nvim-lsp-installer.health").check()
========================================================================
## nvim-lsp-installer report
- OK: neovim version >= 0.6.0
- WARNING: **Go**: not available
- WARNING: **Ruby**: not available
- WARNING: **RubyGem**: not available
- WARNING: **Composer**: not available
- WARNING: **PHP**: not available
- WARNING: **javac**: not available
- WARNING: **julia**: not available
- OK: **sh**: `Ok`
- OK: **bash**: `GNU bash, version 5.1.4(1)-release (x86_64-pc-linux-gnu)`
- OK: **tar**: `tar (GNU tar) 1.34`
- OK: **gzip**: `gzip 1.10`
- OK: **curl**: `curl 7.74.0 (x86_64-pc-linux-gnu) libcurl/7.74.0 OpenSSL/1.1.1j zlib/1.2.11 brotli/1.0.9 libidn2/2.3.0 libpsl/0.21.0 (+libidn2/2.3.0) libssh/0.9.5/openssl/zlib nghttp2/1.43.0 librtmp/2.3`
- OK: **wget**: `GNU Wget 1.21 built on linux-gnu.`
- OK: **python3**: `Python 3.10.1`
- OK: **node**: `v17.1.0`
- OK: **java**: `Ok`
- OK: **npm**: `8.1.2`
- OK: **pip3**: `pip 21.2.4 from /usr/local/lib/python3.10/site-packages/pip (python 3.10)`
I did, the docs are great. Here's some relevant code from my config,
local servers = {
-- [other servers omitted from this snippet]
"yamlls",
}
for _, name in pairs(servers) do
local server_is_found, server = lsp_installer.get_server(name)
if server_is_found then
if not server:is_installed() then
print("Installing " .. name)
server:install()
end
end
end
lsp_installer.on_server_ready(function(server)
local serverOpts = {
on_attach = on_attach, -- [properly defined, but not included in this snippet]
}
-- [other servers omitted from this snippet]
if server.name == "yamlls" then
serverOpts.settings = {
yaml = {
schemas = {
["https://raw.githubusercontent.com/OAI/OpenAPI-Specification/main/schemas/v3.1/schema.json"] = "/**/openapi.yaml",
},
},
}
end
server:setup(serverOpts)
end)
Does it start and attach if you try the following (this creates a new git repository in a tmp dir)?
$ cd `mktemp -d`
$ git init
$ touch test.yml
$ nvim test.yml
It does, it starts and attaches successfully.
In seeing the git init command above, I thought I'd mention that the yaml file that made me report this issue is in a git submodule. Just tried it with a new yaml file at the root of the parent git repo and it attached just fine, completion working too.
I think the Unable to find executable. message in :LspInfo is not always 100% true - sometimes it has a tendency to report that the executable was not found when the issue is something else. I believe one cause is when the server fails to properly start - can you find anything of interest in the LSP logs? exe 'tabnew ' .. luaeval("vim.lsp.get_log_path()")
|
2025-04-01T06:40:56.130005
| 2022-10-13T17:23:51
|
1408172348
|
{
"authors": [
"anjanesh",
"msantoshk"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11929",
"repo": "willmeyers/django-bunny-storage",
"url": "https://github.com/willmeyers/django-bunny-storage/issues/5"
}
|
gharchive/issue
|
Does this support Django 4.1 ?
Framework :: Django :: 4.0
Not tested yet, but as a trial you can go with Django 4.0 and raise any issues here.
|
2025-04-01T06:40:56.148249
| 2023-08-31T22:17:09
|
1876383493
|
{
"authors": [
"AndPotap",
"mfinzi"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11930",
"repo": "wilson-labs/cola",
"url": "https://github.com/wilson-labs/cola/pull/37"
}
|
gharchive/pull-request
|
Gpu allocation tests
Added a series of tests for the to(device) functionality across several of our LinearOperators.
I think you want from linalg.operator_market import op_names, get_test_operator
rather than from test.linalg.operator_market import op_names, get_test_operator.
(see e.g. the example tests in test_decomps.py)
|
2025-04-01T06:40:56.152303
| 2022-10-24T23:24:29
|
1421607222
|
{
"authors": [
"MrNova111",
"wiltonsr"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11931",
"repo": "wiltonsr/ldapAuth",
"url": "https://github.com/wiltonsr/ldapAuth/issues/24"
}
|
gharchive/issue
|
Customize settings per container
Apologies if I am missing something in the documentation or examples, but is there a straightforward way to have per-container settings (for example, a different set of Allowed Groups) without duplicating common settings such as the LDAP URL?
Hi @MrNova111,
Thanks for your interest in ldapAuth.
Apologies if I am missing something in the documentation or examples, but is there a straightforward way to have per-container settings (for example, a different set of Allowed Groups) without duplicating common settings such as the LDAP URL?
Unfortunately, there isn't. If you try to overwrite the middleware configs traefik will return an error like this:
traefik | time="2022-10-25T13:17:18Z" level=error msg="Middleware defined multiple times with different configurations in [...]" providerName=docker middlewareName=ldap_auth
I believe I may have figured out a solution that uses go templating. In my configuration file I defined a template that contains all my common settings, and then created a middleware instance for each container router that references the common template:
{{define "ldapTemplate"}}Url: ldaps://example.org{{end}}
{{define "ldapConfig"}}http:
middlewares:
ui-ldapAuth:
plugin:
ldapAuth:
LogLevel: DEBUG
{{template "ldapTemplate"}}
AllowedGroups:
- groupA
web-ldapAuth:
plugin:
ldapAuth:
LogLevel: DEBUG
{{template "ldapTemplate"}}
AllowedGroups:
- groupB
{{end}}
{{template "ldapConfig"}}
Then I simply assign each container service its own middleware:
version: '3.5'
services:
traefik:
image: traefik:v2.9
volumes:
- ./traefik.yml:/etc/traefik/traefik.yml:ro
- ./ldapAuth-conf.yml:/dynamic-conf/ldapAuth-conf.yml:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
ui:
labels:
- traefik.enable=true
- traefik.http.routers.ui.rule=Host(`ui.localhost`)
- traefik.http.routers.ui.tls=true
- traefik.http.routers.ui.middlewares=ui-ldapAuth@file
web:
labels:
- traefik.enable=true
- traefik.http.routers.web.rule=Host(`web.localhost`)
- traefik.http.routers.web.tls=true
- traefik.http.routers.web.middlewares=web-ldapAuth@file
Glad to know that worked for you.
Only for future reference, the docs about traefik's go-templating can be found here.
|
2025-04-01T06:40:56.169426
| 2021-10-04T01:31:48
|
1014635665
|
{
"authors": [
"JohnCampionJr",
"antfu"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11932",
"repo": "windicss/windicss",
"url": "https://github.com/windicss/windicss/issues/489"
}
|
gharchive/issue
|
Console use is not allowing WindiCSS 3.1.8 to work in the browser any longer
I'm trying to use WindiCSS in the browser, similar to this:
https://github.com/antfu/windicss-runtime-dom
But anytime I import the Processor, I immediately get a
Uncaught ReferenceError: process is not defined
Repo
https://github.com/JohnCampionJr/vitesse-windicss-browser
Just added a couple of lines trying to bring in Windi
This is something in the changes between 3.1.7 and 3.1.8. Reverting to 3.1.7 makes the problem go away.
It is from the use of Console here.
https://github.com/windicss/windicss/commit/a042e030b87f37bea4a939f4fe92713920a3f9e1#diff-bbb20b2922ba3b91e8b0b876d68f8f379f263fe754b88ebb5b5b3079983fec6e
Added in this commit:
https://github.com/windicss/windicss/pull/426
Need a better way to warn users....
Scratch that, just saw PR #488
Released as v3.1.9
|
2025-04-01T06:40:56.200152
| 2019-09-26T15:50:19
|
498972402
|
{
"authors": [
"jazzdan",
"landism",
"nicks"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11933",
"repo": "windmilleng/tilt",
"url": "https://github.com/windmilleng/tilt/issues/2263"
}
|
gharchive/issue
|
Old pod logs show up on startup
repro:
tilt up fortune
change the startup log message
kill tilt
tilt up fortune
(potentially repeat a few times)
observed:
we get the fortune build output, followed by the pod output from multiple fortune pods:
fortune ┊ STEP 3/3 — Deploying
fortune ┊ │ Injecting images into Kubernetes YAML
fortune ┊ │ Applying via kubectl:
fortune ┊ │ matt-fortune:deployment
fortune ┊
fortune ┊ │ Step 1 - 0.738s
fortune ┊ │ Step 2 - 0.000s
fortune ┊ │ Step 3 - 0.227s
fortune ┊ │ Done in: 0.965s
fortune ┊
fortune ┊ 2019/09/23 16:21:19 Starting Fortune Service on :8082
fortune ┊ 2019/09/23 16:22:55 Starting Fortune Service on :8082!!
fortune ┊ 2019/09/23 16:24:23 Starting Fortune Service on :8082
expected:
we get the fortune build output followed by the pod output from the current fortune pod
It's possible it's fine to show pod output from the previous fortune pod, but:
it should be prior to the build log in the tilt ui, since it preceded it chronologically. Its current position following the build is very confusing
I don't think there's an argument for showing pod logs for pods that never existed at the same time as Tilt (and I'm kind of surprised / unsure how Tilt's even managing to get them)
This is observable more dramatically if the service was in a crash loop (you'll get startup logs from every crash!) or if the service had logged a lot (@jazzdan reported tilt was taking a lot of cpu dealing with old logs)
Originally written by @landism
I don't think there's an argument for showing pod logs for pods that never existed at the same time as Tilt (and I'm kind of surprised / unsure how Tilt's even managing to get them)
Silly me.
For this particular repro, these logs are simply coming from the pod that is running when Tilt starts. It has multiple startup messages because there were multiple live updates to the same pod.
I'm not sure what a good solution to this is.
fixed by #2287
|
2025-04-01T06:40:56.232550
| 2021-09-17T03:31:15
|
998891144
|
{
"authors": [
"CjHare"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11934",
"repo": "windranger-io/windranger-solidity-template",
"url": "https://github.com/windranger-io/windranger-solidity-template/issues/14"
}
|
gharchive/issue
|
GitHub action to include TypeScript linting
Add a step in the master-push-pull GitHub action to perform TypeScript linting, propagating the status to fail || pass the build accordingly
Now completed!
https://github.com/windranger-io/windranger-solidity-template/blob/main/.github/workflows/master-push-pull.yml contains npm run lint, which currently lints the TypeScript
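For reference, a minimal sketch of the shape of such a step in a workflow (the job layout here is illustrative; only npm run lint is taken from the actual workflow):
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm ci
      # A non-zero exit code from the linter fails the build
      - run: npm run lint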
|
2025-04-01T06:40:56.239156
| 2021-09-06T11:53:40
|
989092498
|
{
"authors": [
"ChristianChiarulli",
"flaviusbuffon",
"windwp"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11935",
"repo": "windwp/nvim-autopairs",
"url": "https://github.com/windwp/nvim-autopairs/issues/124"
}
|
gharchive/issue
|
I want LaTeX files to auto-pair {} after entering a function, but it currently auto-pairs ()
Using Neovim and the plugin.
I need this plugin to adapt to LaTeX, so that {} appears automatically instead of (), like the picture below.
https://github.com/windwp/nvim-autopairs/blob/master/lua/nvim-autopairs/completion/cmp.lua
you need to modify that line. If every function in LaTeX should insert {, you can make a PR :+1:
Can you help modify it to achieve the goal?
Hope to get your support!
Would it be possible to disable map_complete per filetype?
I would like it active for everything but latex.
|
2025-04-01T06:40:56.242558
| 2023-06-28T12:01:58
|
1778797922
|
{
"authors": [
"9mm",
"TroySigX"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11936",
"repo": "windwp/nvim-autopairs",
"url": "https://github.com/windwp/nvim-autopairs/issues/369"
}
|
gharchive/issue
|
Can we have the same deletion behavior as jiangmiao/auto-pairs?
Is your feature request related to a problem? Please describe.
For some pair of brackets where the only characters between them are space or newline, when deleting the opening bracket, it's best to also delete the ending bracket.
E.g:
{|
}
where | is the current cursor, after pressing <BS>, it's expected to also delete the ending bracket.
Describe the solution you'd like
Continuously check the characters from the current cursor to the ending bracket; if there are only spaces and newlines, enable deleting in pairs (a rough sketch follows below).
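Not nvim-autopairs code - just a rough sketch of the described check using the Neovim API (the helper name is made up):
-- Hypothetical helper: true when only whitespace separates the cursor
-- from close_char, possibly across newlines, so <BS> could be allowed
-- to delete the whole pair.
local function only_whitespace_until(close_char)
  local row, col = unpack(vim.api.nvim_win_get_cursor(0))
  local lines = vim.api.nvim_buf_get_lines(0, row - 1, -1, false)
  lines[1] = lines[1]:sub(col + 1) -- text after the cursor on this line
  for _, line in ipairs(lines) do
    for i = 1, #line do
      local ch = line:sub(i, i)
      if ch == close_char then
        return true
      elseif not ch:match('%s') then
        return false
      end
    end
  end
  return false
end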
Describe alternatives you've considered
None
I tried that plugin because of this issue @TroySigX... after I did that, it doesn't even seem to backspace across newlines.
When you use that plugin, does it actually do that?
Yes, it does delete across newlines. Here's my config:
require('nvim-treesitter.configs').setup({
endwise = {
enable = true,
},
autotag = {
enable = true,
},
})
require('npairs-int-upair').setup({
bs = 'u',
map = 'n',
})
local Rule = require('nvim-autopairs.rule')
local npairs = require('nvim-autopairs')
local cond = require('nvim-autopairs.conds')
npairs.add_rules({
Rule('$', '$', { 'tex', 'latex' }):with_move(cond.none()):with_del(cond.done()):with_cr(cond.done()),
})
|
2025-04-01T06:40:56.245348
| 2023-09-27T08:55:37
|
1915027255
|
{
"authors": [
"azolus",
"windwp"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11937",
"repo": "windwp/windline.nvim",
"url": "https://github.com/windwp/windline.nvim/issues/63"
}
|
gharchive/issue
|
Is italic / boldItalic text supported?
Hi,
I wanted to ask if it is possible to italicize the text displayed. I am trying to do something like this:
foo_component = {
name = "foo",
text = function()
return { "foo", { "yellow", "ActiveBg", "bold,italic" } }
end,
}
So I want the text foo to be displayed in bold-italic font.
Unfortunately this doesn't work with my config (text is displayed in regular font). Changing it to plain "bold" however (like in the evil-line example) works just fine.
Am I missing something, or is "italic" / "bold,italic" text not yet supported?
It is not supported yet :).
Maybe you can create your own highlight via nvim_set_hl and use that name on the component.
That nvim_set_hl has a lot of options; we only support fg, bg, bold.
|
2025-04-01T06:40:56.258905
| 2019-03-01T02:50:58
|
415916823
|
{
"authors": [
"jarone",
"petef19"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11938",
"repo": "winstonjs/winston",
"url": "https://github.com/winstonjs/winston/issues/1608"
}
|
gharchive/issue
|
Ask a question: Can winston provide an api that reopens the log file?
My usage scenario:
I use pm2 to start my nodejs service, for example: 1 master and 4 workers.
Use winston to write to the same log file in each worker. For example, the file name is: access.log
Can I use logrotate on the Linux system to rotate the log (without copytruncate)? Each time logrotate creates a new log file and renames the old one, I would notify each worker to reopen the log file.
+1
I'm in the exact same boat
Is there a way to reopen a log?
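Until such an API exists, one possible workaround sketch: have logrotate's postrotate script signal each worker and swap the file transport there. This assumes winston 3's logger.add/logger.remove transport API and is untested:
const winston = require('winston');

let fileTransport = new winston.transports.File({ filename: 'access.log' });
const logger = winston.createLogger({ transports: [fileTransport] });

// logrotate's postrotate would send e.g. SIGHUP to every worker; dropping
// the old transport releases the old fd, the new one reopens the file.
process.on('SIGHUP', () => {
  logger.remove(fileTransport);
  fileTransport = new winston.transports.File({ filename: 'access.log' });
  logger.add(fileTransport);
});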
|
2025-04-01T06:40:56.275050
| 2023-04-29T17:19:22
|
1694074155
|
{
"authors": [
"01Kuzma",
"LukeTowers",
"mjauvin"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11939",
"repo": "wintercms/wn-blog-plugin",
"url": "https://github.com/wintercms/wn-blog-plugin/issues/38"
}
|
gharchive/issue
|
After Update - An exception has been thrown during the rendering of a template
Winter CMS Build
dev-develop
PHP Version
8.1
Database engine
MySQL/MariaDB
Plugins installed
No response
Issue description
Just updated the working Winter CMS 1.2.1 with composer update and winter:up commands, and it's broken now.
Steps to replicate
So, I get the An exception has been thrown during the rendering of a template ("Undefined array key "about"") in "C:\laragon\www\winter3\themes\mytheme\partials\intro.htm" at line 3.
intro.htm points to hero1.htm which post list component:
[viewBag]
[blogPosts]
pageNumber = "{{ :page }}"
postsPerPage = 7
noPostsMessage = "No posts found"
sortOrder = "random"
categoryPage = "about"
postPage = "blog-post"
==
As I understand it, the error is pointing to the categoryPage = "about" part.
Before the update, it was working
Workaround
No response
@mjauvin I think this issue is caused by the recent refactoring / improvements, are you able to replicate this at all?
Sorry, can't reproduce this.
@01Kuzma are you able to upload a basic version of your theme that contains just enough to replicate this issue in a fresh install of the winter develop branch and the latest blog plugin?
@petehalverson side note, would really love to have Octodock back in some capacity 😉
Maybe this one will help...
filmustudija.zip
I tried your theme, but it is in an unusable state: plugins/partials missing, layout half beaten to death...
I managed to remove cruft to make it work, and didn't get the error you reported.
Please submit a theme in a usable state and a procedure to replicate your issue with it.
@mjauvin , that's strange. I'm getting some other errors with it, the frontend even is not loading throwing errors...
OK, I will try to remake it.
@mjauvin I've reviewed it; I don't know what to upload, because the theme is image-dependent (it pulls them from storage) and without them it looks empty (as you probably saw).
I've just removed the partials with private information and excessive templates.
Removing the component from the page, of course, removes this error.
But I have another one, accessing page Portfolio with two components: Post list & Category List gives this:
Removing the Category List removes the error.
Can you show your Portfolio page ? Specifically, the url and the blogCategories component settings ?
I use this component without any problems on my latest website.
@LukeTowers I was able to generate an error with the blogCategories component when setting an invalid slug to the component slug property. What generates the error is this change:
- public $currentCategorySlug;
+ public string $currentCategorySlug = '';
PHP now throws an error if you assign null to this class property, because it now expects a string.
So basically, if you have the following page/component settings, it will throw an error:
url = /blog/:slug
layout = default
[blogCategories]
slug = "{{ :invalidSlug }}"
categoryPage = "blog"
==
Notice the {{ :invalidSlug }} assigned to the component's slug property when it should be {{ :slug }}
I suspect it's possible to trigger similar errors in other blog components as well because of the extra property validation that was added. This is not necessarily a bad thing, but might break badly written themes.
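The underlying failure is plain PHP typed-property behaviour, roughly (illustrative snippet, not plugin code):
<?php
class Example {
    public string $slug = ''; // non-nullable typed property
}

$e = new Example();
$e->slug = null; // TypeError: cannot assign null to a non-nullable string property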
@mjauvin , here it is:
title = "Portfolio"
url = "/portfolio/:page?"
layout = "default"
meta_description = "Desc..."
is_hidden = 0
[blogPosts]
pageNumber = "{{ :page }}"
categoryFilter = "{{ :slug }}"
postsPerPage = 10
noPostsMessage = "Įrašų nerasta"
sortOrder = "published_at desc"
categoryPage = "blog-category"
postPage = "blog-post"
[blogCategories]
slug = "{{ :slug }}"
displayEmpty = 0
categoryPage = "blog-category"
==
{% set posts = blogPosts.posts %}
Just change:
[blogCategories]
slug = "{{ :slug }}"
To:
[blogCategories]
slug = "{{ :page }}"
To solve your issue.
@LukeTowers should we change the component like this to restore original behavior ?
diff --git a/components/Categories.php b/components/Categories.php
index 10a958c..b0609e3 100644
--- a/components/Categories.php
+++ b/components/Categories.php
@@ -17,12 +17,12 @@ class Categories extends ComponentBase
/**
* Reference to the page name for linking to categories.
*/
- public string $categoryPage = '';
+ public ?string $categoryPage = '';
/**
* Reference to the current category slug.
*/
- public string $currentCategorySlug = '';
+ public ?string $currentCategorySlug = '';
public function componentDetails(): array
{
@mjauvin it solves the portfolio issue.
Why did this happen? I created this theme a long time ago based on some tutorials, as I remember.
And how to fix the main problem? What should I change here? "\partials\intro.htm" at line 3
is pointing to hero-slider/hero1.htm with the Post List component, which is:
[viewBag]
[blogPosts]
pageNumber = "{{ :page }}"
postsPerPage = 7
noPostsMessage = "No posts found"
sortOrder = "random"
categoryPage = "about"
postPage = "blog-post"
==
@mjauvin it solves the portfolio issue. Thank you! Why did this happen? I created this theme a long time ago based on some tutorials, as I remember.
It happens because there are errors in your theme and the last update to the plugin introduced property validation for the components.
And how to fix the main problem? What should I change here? "\partials\intro.htm" at line 3 is pointing to hero-slider/hero1.htm with the Post List component, which is:
Please, always give the full settings section of the page you ask help for, otherwise it's hard to help.
[viewBag]
[blogPosts]
pageNumber = "{{ :page }}"
postsPerPage = 7
noPostsMessage = "No posts found"
sortOrder = "random"
categoryPage = "about"
postPage = "blog-post"
==
@mjauvin , sorry, have edited the last post
@mjauvin , any thoughts regarding last issue?
|
2025-04-01T06:40:56.344897
| 2022-02-21T13:51:03
|
1145827443
|
{
"authors": [
"jamietanna",
"timtebeek"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11940",
"repo": "wiremock/wiremock",
"url": "https://github.com/wiremock/wiremock/pull/1816"
}
|
gharchive/pull-request
|
Archunit 0.23.0
Closes #1815.
Method references are now picked up, leading to fewer false positives in detecting unused methods.
Required a minor code change to now use JavaCodeUnitAccess.
Updated the store to only freeze the (for now disabled) unused-methods test, as the unused-public-methods test no longer has a false positive. We can reconsider whether that test needs to remain disabled, as there should no longer be false positives.
Ideally and eventually we can remove true unused methods through #1702; until then this minimal change allows us to pick up new releases of ArchUnit.
I'll try and have a look today, but @tomakehurst will still need to do the honours of merging as I've not yet got commit access ☺
I think this LGTM - happy for Tom to have another pair of eyes and merge when ready :+1:
Discussed outside GitHub; this one is good to go it seems! :)
|
2025-04-01T06:40:56.346670
| 2017-09-14T17:13:14
|
257793308
|
{
"authors": [
"NaveNO"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11941",
"repo": "wiremod/wire",
"url": "https://github.com/wiremod/wire/issues/1466"
}
|
gharchive/issue
|
dsSend(name, group, data) not working?
Hi again. Here is my code:
Sender: https://pastebin.com/dVFaGdp2
Receiver: https://pastebin.com/DuRpP8GG
No printout message from the receiver. I also tried with Indicators, but got the same result. Am I doing something wrong?
Again, the issue was caused by the world saving system.
|
2025-04-01T06:40:56.414107
| 2024-04-02T18:22:11
|
2221151508
|
{
"authors": [
"bholmesdev",
"sarah11918"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11942",
"repo": "withastro/docs",
"url": "https://github.com/withastro/docs/pull/7744"
}
|
gharchive/pull-request
|
Rework Markdoc info hierarchy
Description (required)
Restructure the table of contents with clearer h2 sections. Before, everything was nested under Configuration. This breaks out related sections and moves higher-traffic content (ex. how to use UI components) further up.
Before | After
Related issues & labels (optional)
Closes #
Suggested label:
@bholmesdev the table of contents does indeed look much nicer!
Just because the diff is going to look terrible here, and not really reflect what you actually did: is the section on Partials the only new/changed content (other than reordering)? This will save me a bunch of close reading trying to figure out what content actually did change, and will also help the translators who will have to make sense of this PR when they update all the other languages. :smile:
Yes, apologies! Let's get the Partials PR reviewed first, then rebase this PR once it is merged. That way we don't have to untangle new content from reorganization.
@sarah11918 Okay, rebased and ready for review!
Great! Assuming this is just reorganization of existing content for flow, this now should be an easier read!
|
2025-04-01T06:40:56.421199
| 2024-04-14T17:53:15
|
2242286657
|
{
"authors": [
"astrobot-houston",
"thomasbnt"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11943",
"repo": "withastro/docs",
"url": "https://github.com/withastro/docs/pull/7891"
}
|
gharchive/pull-request
|
i18n(fr): Updating guides/backend/supabase.mdx from #7767
Description (required)
Updating guides/backend/supabase.mdx from #7767
Related issues & labels (optional)
Closes #
Suggested label: i18n
Lunaria Status Overview
🌕 This pull request will trigger status changes.
Learn more
By default, every PR changing files present in the Lunaria configuration's files property will be considered and trigger status changes accordingly.
You can change this by adding one of the keywords present in the ignoreKeywords property in your Lunaria configuration file in the PR's title (ignoring all files) or by including a tracker directive in the merged commit's description.
Tracked Files
File
Note
Locale
src/content/docs/fr/guides/backend/supabase.mdx
Localization changed, will be marked as complete.
fr
Warnings reference
Icon
Description
🔄️
The source for this localization has been updated since the creation of this pull request, make sure all changes in the source have been applied.
|
2025-04-01T06:40:56.424074
| 2024-03-14T21:59:58
|
2187336664
|
{
"authors": [
"astrobot-houston",
"fl0wo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11944",
"repo": "withastro/starlight",
"url": "https://github.com/withastro/starlight/pull/1618"
}
|
gharchive/pull-request
|
Update showcase-sites.astro
Add DipSway starlight website as showcase example
Hello! Thank you for opening your first PR to Starlight! ✨
Here’s what will happen next:
Our GitHub bots will run to check your changes.
If they spot any issues you will see some error messages on this PR.
Don’t hesitate to ask any questions if you’re not sure what these mean!
In a few minutes, you’ll be able to see a preview of your changes on Vercel 🤩
One or more of our maintainers will take a look and may ask you to make changes.
We try to be responsive, but don’t worry if this takes a few days.
|
2025-04-01T06:40:56.426839
| 2024-09-08T08:40:14
|
2512268986
|
{
"authors": [
"astrobot-houston",
"delucis"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11945",
"repo": "withastro/starlight",
"url": "https://github.com/withastro/starlight/pull/2303"
}
|
gharchive/pull-request
|
Convert URL to file path correctly for Git virtual module
Description
Closes #2302
Correctly resolves a path to an internal module to handle file paths with special characters
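For context, the general shape of this class of bug: naive URL-to-path string handling keeps percent-encoding, while Node's fileURLToPath decodes it (illustrative sketch, not the actual Starlight code):
import { fileURLToPath } from 'node:url';

// A directory containing a space is percent-encoded inside a file URL:
const moduleUrl = new URL('file:///home/user/my%20docs/virtual.ts');

const broken = moduleUrl.pathname;        // "/home/user/my%20docs/virtual.ts"
const correct = fileURLToPath(moduleUrl); // "/home/user/my docs/virtual.ts"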
size-limit report 📦
Path
Size
/index.html
6.15 KB (0%)
/_astro/*.js
22.36 KB (0%)
/_astro/*.css
13.72 KB (0%)
|
2025-04-01T06:40:56.431656
| 2018-12-02T11:05:17
|
386549370
|
{
"authors": [
"witmoca"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11946",
"repo": "witmoca/BEATs",
"url": "https://github.com/witmoca/BEATs/issues/11"
}
|
gharchive/issue
|
Archive table Enhancements
[x] Row index
[x] Rowsorter
[x] Multicolumn priority support for Rowsorter
[x] Pick and choose the shown columns
New Requirement: Multi column support standard style with icons
|
2025-04-01T06:40:56.447376
| 2023-01-26T17:57:07
|
1558541734
|
{
"authors": [
"jstawow",
"noomorph"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11947",
"repo": "wix/Detox",
"url": "https://github.com/wix/Detox/issues/3876"
}
|
gharchive/issue
|
When I try to init with mocha, the configuration is not for mocha but for jest
What happened?
precondition:
detox is installed in devDep
mocha is installed in devDep
detox-cli is installed in global
When I try to init with mocha, the generated configuration is not for mocha but for jest; output below.
npx detox init -r mocha
Created a file at path: .detoxrc.js
Created a file at path: e2e/jest.config.js
Created a file at path: e2e/starter.test.js
What I tried:
different installations of the lib, by npm and yarn as well; I mean that I tried both yarn and npm for all libs
What was the expected behaviour?
No response
Was it tested on latest Detox?
[X] I have tested this issue on the latest Detox release and it still reproduces.
Help us reproduce this issue!
No response
In what environment did this happen?
Detox version: 20.1.2
React Native version:
Has Fabric (React Native's new rendering system) enabled: (yes/no)
Node version: v14.17.6
npm: 8.9.0
yarn: 1.22.18
Test-runner (select one): jest / mocha
Detox logs
No response
Device logs
No response
More data, please!
No response
Detox 20+ does not support Mocha.
There is a discussion in https://github.com/wix/Detox/issues/3772 if someone wants to try to create a third-party integration detox-mocha.
|
2025-04-01T06:40:56.449182
| 2020-10-27T15:12:00
|
730542796
|
{
"authors": [
"d4vidi"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11948",
"repo": "wix/Detox",
"url": "https://github.com/wix/Detox/pull/2434"
}
|
gharchive/pull-request
|
Expand retry() API and improve exec() retry-logging
[x] This is a small change
[ ] This change has been discussed in issue #<?> and the solution has been agreed upon with maintainers.
Description:
Associated with the ongoing work on Genymotion-Cloud integration (#2429), where logging of command failures, and better-retry control, are required.
will reopen soon
|
2025-04-01T06:40:56.451740
| 2017-06-01T17:59:20
|
232961112
|
{
"authors": [
"artald",
"shergin"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11949",
"repo": "wix/react-native-autogrow-textinput",
"url": "https://github.com/wix/react-native-autogrow-textinput/issues/19"
}
|
gharchive/issue
|
FYI: All multiline textinputs are autogrowing by default on iOS
LMK if you have questions.
I will also work on the same functionality on Android; if you have ideas on how it should be implemented, please share them with me.
@shergin that's great news! thanks for the update!
This means that I can just remove the manual height handling for iOS...
A couple of questions:
In what version of RN did this feature become available?
Can it be controlled via some prop? (turn it off/on for example)
Just published version 4.0.0, which uses the default RN implementation for auto-expanding. Hopefully at some point this will be supported on Android as well, so it can be simplified and all the other hacks can be removed.
|
2025-04-01T06:40:56.458859
| 2017-12-17T10:49:27
|
282685136
|
{
"authors": [
"tempit"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11950",
"repo": "wix/stylable-intelligence",
"url": "https://github.com/wix/stylable-intelligence/issues/144"
}
|
gharchive/issue
|
Signature help for mixins crashes when value includes '(' or ')'
Either in a string or as a mixin param list.
Fixed with postcss-value-parser
|
2025-04-01T06:40:56.471099
| 2016-01-05T14:26:44
|
124976985
|
{
"authors": [
"dkomanov",
"grunzwei",
"hugebdu",
"noam-almog",
"viliusl"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11951",
"repo": "wix/wix-embedded-mysql",
"url": "https://github.com/wix/wix-embedded-mysql/issues/36"
}
|
gharchive/issue
|
can't load .sql files from unexploded jars in classPathFiles
I'm using the embedded-mysql.
There is a function
classPathFile
that basically does new File(resource.toURI).
This doesn't work for me when running Maven, because everything is in a jar, so there is no file on the filesystem that it can access.
My jar depends on a test-utils jar in test scope, and it has the mysql init code in main.
This can be solved using the Java 8 filesystem API.
Do you have your files packed in the classes resources or test resources?
Basically we load files from the classpath, so if they are in other jars it's still possible to read them.
You load them by doing something equivalent to new File(resource.URI);
if the URI is internal to a jar, this doesn't work.
You can work around this using the new Java 8 filesystem abstraction.
My files are packed in main/resources of a jar that I depend on in test scope, so it comes as an unexploded jar.
@grunzwei - ok, so I might just have to loadResourceAsStream() instead. Will have to add a test to verify it. What I don't want to do yet is make this library bound to Java 8 - there is no real reason for that, so why not leave the door open for poor Java 7 users :)
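A minimal sketch of that stream-based approach (illustrative only, not the library's actual code):
import java.io.InputStream;
import java.util.Scanner;

class ScriptLoader {
    // Reading via a stream works whether the resource is an exploded file
    // on disk or packed inside a jar, unlike new File(resource.toURI()).
    static String loadScript(String classpathLocation) {
        InputStream in = ScriptLoader.class.getResourceAsStream(classpathLocation);
        try (Scanner s = new Scanner(in, "UTF-8").useDelimiter("\\A")) {
            return s.hasNext() ? s.next() : "";
        }
    }
}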
@viliusl had we used guava we wouldn't have this issue, no ? ;)
Haven't checked. The problem I had with guava was that I could not use the most recent version due to conflicts with the framework, but maybe even a not-most-recent one would cut it.
I will play with it once I have time. Hopefully next week.
@viliusl any progress on that?
I'm trying to migrate projects to wix-embedded-mysql and looks like it is an issue.
@hugebdu https://github.com/wix/wix-embedded-mysql/commit/48a0ebc93799eaf48af81bc808f54ba67ea5a044
@hugebdu and the little script:
def loadResources(path: String): Seq[String] = {
if (path.isEmpty) {
Nil
} else {
val resources = new PathMatchingResourcePatternResolver().getResources(path)
resources.sortBy(_.getFilename).map(r => IOUtils.toString(r.getInputStream, java.nio.charset.Charset.forName("UTF-8")))
}
}
Use like reloadSchema(aSchemaConfig... withCommand(loadResources("classpath:*.sql")))
@dkomanov your commit is not yet merged, right?
My PR merged: https://github.com/wix/wix-embedded-mysql/pull/43
@hugebdu - I did an impl for this (https://github.com/wix/wix-embedded-mysql/commits/scripts-in-jar), but still thinking on naming and apis - which should be deprecated/supported. So @dkomanov solution will work for you right now.
|
2025-04-01T06:40:56.533378
| 2020-01-15T11:09:13
|
550119968
|
{
"authors": [
"mfn",
"spawnia",
"steamboatid",
"wizacedric"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11952",
"repo": "wizaplace/phpunit-slicer",
"url": "https://github.com/wizaplace/phpunit-slicer/pull/8"
}
|
gharchive/pull-request
|
Do not extend final class TestRunner and just copy-and-paste its contents
This is definitely a hack and a maintenance nightmare - but it works for now.
We can look for a better alternative later, see https://github.com/wizaplace/phpunit-slicer/issues/7
I learned about phpunit-slicer from your issue over at https://github.com/sebastianbergmann/phpunit/issues/4121
I've been using this PR (git commit hash 306846a872ca4ed8c2ec8b5592cfe29d28a8d415) for a few weeks now since moving over to GitHub Actions in a private repo, and it's a category "game changer" in terms of cutting down the runtime (~10k tests => went down from the initial 2 suites at ~15 mins to 6 slices done in ~4-5 mins).
I already tried to come up with something better but ran into all the problems you probably did too and would have ended up copying the file over 🤷♀️
I lack the knowledge of phpunit internals (and a bit of time) to dig into that. But it seems phpunit is on a pace to seal off much of its openness in order to reduce complexity and too-creative use on their side. Likely due to all the petty support issues they have to deal with (just speculation; reading their issues for a while gives me that impression).
Couldn't upgrade to phpunit9 due to version constraint in composer.json
Have not yet found time (will try later), but does anyone already know if it "just works" or not?
I got it working but I was basically doing the same copypaste galore you already did 😄
It becomes pretty clear that any package trying to do this has to be creative because the way phpunit keeps changing currently you can't simply have a single code base support all those versions.
Either you jump through hoops with PHPunit version checks or class_exists stuff for a single code base or the package just keeps dedicated major releases mirroring the respective PHPUnit release.
Of course only now I realized that @spawnia was already working on this in https://github.com/mll-lab/phpunit-slicer/commits/phpunit-9, but without a PR.
Not sure who runs the repo; the last commits are from @wizacedric and @philippe-vandermoere: are you still interested in keeping the repo, any thoughts on the matter?
@mfn quick heads up, the PHPUnit 9 branch only works with version 9.2.5 (potentially lower, have not tried that). 9.2.6 breaks things again. I locked the requirement to make sure https://github.com/mll-lab/phpunit-slicer/commits/phpunit-9 keeps working for now.
@spawnia thanks!
I did not notice, as I had already assimilated the slicer into a private code repo with 9.2.6. Call that luck 😅
So we would end up continuously having to copy/paste-adapt the two classes…
When I assimilated the code (I need to move fast here) it turned out it's not THAT bad in this bad situation:
copy over command
copy over testsuite
in both apply minimal changes to allow extending them (e.g. remove final, make methods protected instead of private and remove certain return signatures)
keep the most relevant business logic still inside separate files
So every time they release a new version it's the same over and over again…
I can see myself patching this for a company-driven project, but not maintaining it in an OSS project when other people start to depend on it, as I'll just schedule the upgrades for when I really have time.
Damn :)
@spawnia @mfn Thanks for both your inputs. I don't think we can accept the PR as it is. It would create a hard link between a version of phpunit/phpunit and a version of wizaplace/phpunit-slicer, which would be a problem for most users. We would need to have as many versions of wizaplace/phpunit-slicer as there are versions of phpunit/phpunit...
Can https://phpunit.readthedocs.io/en/9.2/extending-phpunit.html#extending-the-testrunner be a way to move forward on this?
@wizacedric PHPUnit brought us into this mess by locking down their implementation, adding final and private all over the place. At the same time, they did not offer extension mechanisms that are quite flexible enough to achieve what is needed.
Can https://phpunit.readthedocs.io/en/9.2/extending-phpunit.html#extending-the-testrunner be a way to move forward on this?
Maybe using BeforeTestHook and skipping all tests that are not in the current slice? We could probably hack around and (ab-)use static properties to pass the CLI arguments and count the tests.
According to https://github.com/sebastianbergmann/phpunit/issues/4121, PHPUnit would most likely add a solution that allows extensions to control PHPUnit test selection through an XML export/import.
It would create a hard link between a version of phpunit/phpunit and a version of wizaplace/phpunit-slicer, which would be a problem for most users.
Agree it is problematic, still better than no solution at all.
We would need to have as many versions of wizaplace/phpunit-slicer as there are versions of phpunit/phpunit...
Not totally true, we would have to cover more specific ranges, down to the minor version.
Maybe using BeforeTestHook
I gave it a cursory look so I might be wrong, but the hook doesn't even receive the test, let alone return a value or signal in any way how to proceed or not.
It would probably require calling \PHPUnit\Framework\Assert::markTestSkipped if we could even figure out that we sliced a particular test, but since the context is basically the string name of the test, I doubt that.
The whole extending-phpunit chapter, whilst looking nice from above (hooks everywhere), isn't really useful for us since you basically can't "control" anything.
I could see a hacky way using global state somewhere, calculating the slice based on counting the tests executed (hopefully tests are always run in the same order…) and throwing a SkippedTestError.
I've a >10k test suite; if I make 5 slices this means I'll throw this error 40k times to skip the tests… nah, I don't think I'll even attempt this.
Want to see a "funny" coincidence?
me, wanting to figure out how to properly extend via createRunner
=> https://stackoverflow.com/questions/63208917/how-to-extend-phpunit-textui-commandcreaterunner-in-recent-versions-of-phpuni
"Remove Command::createRunner()"
I can hardly believe (see the timestamps involved) this is a coincidence.
Might as well join efforts.
Agree, I made https://github.com/wizaplace/phpunit-slicer/pull/9 to show the intent I had in mind.
I've yet to look further but I wonder if the concept of test filters in phpunit can be better (ab)used to "slice", see https://github.com/sebastianbergmann/phpunit/blob/ff047828b43b7ba88300372fb41943ddceb2db03/src/Runner/Filter/Factory.php#L50-L60 🤷♀️
How about we give in and try to get something like https://github.com/sebastianbergmann/phpunit/issues/3387 implemented in PHPUnit?
Yeah I remember it was mentioned in https://github.com/sebastianbergmann/phpunit/issues/4121#issuecomment-589594578 ;)
Btw. this issue links to https://github.com/sebastianbergmann/phpunit/pull/3605 which I wasn't really aware of
And I thought "aha, 'phpunit chunk'" and voila https://www.google.com/search?hl=en&q=phpunit chunk => https://github.com/jwage/phpchunkit
But basic installation fails, all dependencies are so outdated I can't install it with L7/PHPUnit9
Not sure if @jwage is still active in this area?
How about we give in and try to get something like sebastianbergmann/phpunit#3387 implemented in PHPUnit?
After having to go through the pain recently again of adapting sources for PHPUnit 9.3, I finally gave it a stab at
I'm happy (…) to report that the copypaste approach still works with PHPUnit 9.5 :}
I tried https://github.com/wizaplace/phpunit-slicer/pull/10 but I couldn't get it working, left comments there.
Only now I saw I got feedback in my phpunit PR which I was not ware of, will look at this ASAP too.
Since the signs are not good that the copypaste approach will continue to work, as phpunit is changing their internals faster than I change my underwear, and https://github.com/sebastianbergmann/phpunit/pull/4449 also doesn't show any movement, I devised a new strategy.
I don't think I'm the first to come up with this, but TBH I have not seen this solution anywhere else:
create list of the tests in XML format from phpunit:
phpunit --list-tests-xml all_tests.xml
Use a script to "splice" this XML into smaller fragments, based on the idea of phpunit-slicer -> phpunit_xml_slicer.php
phpunit_xml_slicer.php all_tests.xml 2/10 > slice_2_10.xml
Use yet another script phpunit_xml_class_to_file.php which takes the sliced XML and:
builds a map of all the test classes
using the composer.json matches them for their PSR-4 namespace and uses this to convert them to files
replaces any existing testsuites purely with a single suite with all the files from the references test classes from the sliced XML
phpunit_xml_class_to_file.php composer.json phpunit.xml.dist slice_2_10.xml > phpunit-ci.xml
Use the phpunit-ci.xml to run phpunit in CI, which now only contains the <file>s from the sliced XML:
phpunit --configuration phpunit-ci.xml
This sounds involved, but it's just a few lines added to e.g. a GitHub Actions step and presto, you can get almost the same benefit as from phpunit-slicer, except now it's not depending on any PHPUnit internals anymore; pseudo example:
jobs:
phpunit:
strategy:
matrix:
phpunit-slices: ['1/6', '2/6', '3/6', '4/6', '5/6', '6/6']
steps:
# … other steps before
- name: 'phpunit: export all tests as XML'
run: vendor/bin/phpunit --list-tests-xml all_tests.xml
- name: 'phpunit: slice tests ${{ matrix.phpunit-slices }}'
run: phpunit_xml_slicer.php all_tests.xml ${{ matrix.phpunit-slices }} > slice.xml
- name: 'phpunit: convert classes to files and inject them back into phpunit XML config'
run: phpunit_xml_class_to_file.php composer.json phpunit.xml.dist slice.xml > phpunit-ci.xml
- run: vendor/bin/phpunit --configuration phpunit-ci.xml
Biggest difference
The approach used by phpunit-slicer can operate on the "test method" level, i.e. very fine-grained.
(Not sure how it handles @dataprovider)
The approach outlined above operates on the "test class" level, i.e. it's much more coarse
I had concerns that this would create an imbalance in the test suite I'm testing this with (~15k tests), so that some slices would run much longer than others, but it turns out in practice the difference was not noticeable. Might be a lucky case for me though, YMMV.
Why?!
I was exploring https://laravel.com/docs/8.x/testing#running-tests-in-parallel the other day and I could not really use phpunit-slicer for this, ran into all sorts of issues and finally took another stab at it.
What about the phpunit PR you mentioned?
https://github.com/sebastianbergmann/phpunit/pull/4449
Not sure when this progresses, but this would be the ideal case for a solution here:
it would be as fine grained as phpunit-slicer is, i.e. operate on the "individual test" level (including @dataprovider)
paratest (the underlying tool for Laravels parallel testing) signaled they would support above PR too => https://github.com/paratestphp/paratest/issues/556#issuecomment-741632177
which means I would expect adding it to Laravel would be at least technically possible too
Let's see \o/
I've already spent way more time on these kinds of topics than I ever thought I would; happy to hear about other perspectives / solutions / approaches 😄
Looks like @mfn and me maintain something like this on the side anyways. It is a bit ugly, but not too much of a hassle. Might as well join efforts.
@spawnia it's PHP, and it's open source.
Why not fork it, then simply remove the final keyword and replace private with protected?
It seems that's the only way in PHP, because PHP doesn't support overloading.
I'm facing the same problem as you. Long run times and inconsistent test results make me nuts.
It's OK when I run manually by group and filter, but I find tons of errors and failures when running all at once.
For the time being,
I will patch the code in phpunit directly
|
2025-04-01T06:40:56.544378
| 2023-08-11T10:44:34
|
1846605595
|
{
"authors": [
"darosior",
"edouardparis"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11953",
"repo": "wizardsardine/liana",
"url": "https://github.com/wizardsardine/liana/pull/606"
}
|
gharchive/pull-request
|
gui: installer: convey registering on a signing device isn't always required
As discussed in #545, in some cases we nudge the user to register the descriptor on their signing device although they might not have one. Those cases only ever arise when importing a descriptor (either when recovering from backup or participating in the creation of a descriptor on another laptop), since when the descriptor is created beforehand we can simply detect whether a signing device was used and thereby needs to be registered (implemented since https://github.com/wizardsardine/liana/pull/470).
Therefore, detect when the registration step arises as part of an import process and, if so, adjust the language to convey that registration on a signing device may not be necessary.
Result:
Fixes #545.
In the future we could ask them beforehand whether they'll be using a signing device and only show them this step if so. But until we do that it's a minimal patch for the current misleading behaviour.
When importing the descriptor, the user needs to store the ledger HMAC again. I am ok with making the step easier to skip for the import-descriptor process, but users with a Ledger will have to do it.
I don't understand your comment in the context of this PR?
|
2025-04-01T06:40:56.563769
| 2020-09-17T12:04:25
|
703526292
|
{
"authors": [
"leamas",
"wjwwood"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11954",
"repo": "wjwwood/serial",
"url": "https://github.com/wjwwood/serial/issues/229"
}
|
gharchive/issue
|
More packaging: v8stdint.h clash
The v8stdint.h header is used in multiple packages, and creates collisions in large namespaces like Debian. Furthermore, this file is not really needed on any Linux system, since these have stdint.h available.
The enclosed patch drops v8stdint.h from the package on hosts having stdint.h available, thus resolving this issue.
0001-Avoid-using-v8stdint.h-unless-needed.patch.gz
Please consider a pull request instead of a patch file: https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request
Also, v8stdint.h is used conditionally on Windows only, so it should never cause an issue with Debian:
https://github.com/wjwwood/serial/blob/cbcca7c83745fedd75afb7a0a27ee5c4112435c2/include/serial/v8stdint.h#L39
I'll close this since I cannot take a patch file (no git author, etc).
The problem is not the usage, the problem is the very existence of this file, which typically goes into /usr/include, creating collisions with other packages.
The patch is a git patch; you can use git am, which brings you the author, date etc. I was admittedly lazy, the debian packaging lives on gitlab...
I see, given this is a very background project for me, I still don’t see me taking it without a pull request. But the patch is useful for others that stumble here. Thanks!
|
2025-04-01T06:40:56.609417
| 2016-10-14T09:39:02
|
183005990
|
{
"authors": [
"marqh"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11955",
"repo": "wmo-registers/codes-wmo-deploy",
"url": "https://github.com/wmo-registers/codes-wmo-deploy/issues/49"
}
|
gharchive/issue
|
finding entries in WMO code tables
finding different ways to search
scientific domain experts: operational meteorologist
(in charge of a particular kind of data collection)
let's say they are interested in
http://test.wmocodes.info/bufr4/b/21
weather radar
but they don't know where the WMO keep this
in particular let us consider peakiness
http://test.wmocodes.info/bufr4/b/21/071
http://test.wmocodes.info/bufr4/b/21/093
http://test.wmocodes.info/bufr4/b/21/094
http://test.wmocodes.info/bufr4/b/21/182
let's just search
http://test.wmocodes.info/ui/text-search?query=peakiness
how may I narrow my search to the scenario I am interested in?
'search' for table b
http://test.wmocodes.info/ui/text-search?query=table+b
find BUFR4 table B in the list and select
this takes us to:
http://test.wmocodes.info/bufr4/_b
consider velocity, as this has a wider path at the start so it's more useful to show context search
consider adding a link to the 'narrowing your search' page to the search results template
http://test.wmocodes.info/ui/about/findingentries
|
2025-04-01T06:40:56.622386
| 2022-07-25T11:30:23
|
1316692382
|
{
"authors": [
"fzgregor",
"woelper"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11956",
"repo": "woelper/dircpy",
"url": "https://github.com/woelper/dircpy/pull/4"
}
|
gharchive/pull-request
|
Support symbolic links
Hi @woelper,
thank you for providing dircpy! I notice it ignores symbolic links at the moment. Only files and directories are handled.
I implemented the necessary functionality. Note that I had to use path.symlink_metadata().is_ok() to check if a file exists, as path.exists() follows symbolic links. That is, if the symbolic link was dangling, it would return false and the link wouldn't be copied. Also, metadata() would follow symbolic links, which is not what we want here.
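(Side note: the same pitfall exists in other languages, so here is a quick Python sketch of the distinction for anyone who wants to reproduce the behaviour; os.lstat() plays the role of Rust's symlink_metadata():)
# Demonstrates why an exists()-style check misses dangling symlinks:
# os.path.exists() follows the link, os.path.lexists() does not.
import os
import stat
import tempfile

with tempfile.TemporaryDirectory() as d:
    link = os.path.join(d, "dangling")
    os.symlink(os.path.join(d, "missing-target"), link)

    print(os.path.exists(link))   # False -- follows the link, target missing
    print(os.path.lexists(link))  # True  -- the link itself exists
    # os.lstat() queries the link itself, like symlink_metadata() in Rust:
    print(stat.S_ISLNK(os.lstat(link).st_mode))  # True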
Nice, thanks! Wow, I would have never guessed that symlink_metadata() Queries the metadata about a file without following symlinks. I'll merge this PR, but I think I might add a test before I release. If you want to write one, feel free to do so!
Does this #5 work for you?
Thanks, yes. Merged #5 . Nice work!
I like the functionality as it is now as default - it basically recreates a folder 1:1.
As for ergonomics, do you think it makes sense to add options to skip symlinks or dereference them? Or is this something people will not need?
Crates.io version has been released.
Nice! Thanks
As for ergonomics, do you think it makes sense to add options to skip symlinks or dereference them? Or is this something people will not need?
I was thinking about this as well. Typically, symbolic links convey some semantics. E.g., one might use a symlink to ensure the same content is found at various file paths, and wants to keep this invariant even after some program modifies the file via one of the paths. Or to save space by referencing a huge file from multiple paths.
There is the exception though of symbolic links that become dangling in the destination because they don't reference a path within the copied directory...
I, personally, don't see the use for skipping symbolic links; dereferencing during copy might have its use, though.
|
2025-04-01T06:40:56.681091
| 2018-04-02T14:51:38
|
310505958
|
{
"authors": [
"hideokamoto"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11957",
"repo": "wonism/react-google-ads",
"url": "https://github.com/wonism/react-google-ads/pull/1"
}
|
gharchive/pull-request
|
Fix readme.md's typo
In the getting started section, I think the command should be react-google-ads.
I've tried npm i -S react-google-ades but it's not working.
$ npm i -S react-google-ades
npm ERR! code E404
npm ERR! 404 Not Found: react-google-ades@latest
npm ERR! /Users/dc_hideokamoto/.npm/_logs/2018-04-02T14_48_42_089Z-debug.log
Thanks for merging :)
|
2025-04-01T06:40:57.031910
| 2024-04-25T13:56:07
|
2263662712
|
{
"authors": [
"amadeo-workos"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11958",
"repo": "workos/workos-kotlin",
"url": "https://github.com/workos/workos-kotlin/pull/216"
}
|
gharchive/pull-request
|
Fix typo in release script
Description
Documentation
Does this require changes to the WorkOS Docs? E.g. the API Reference or code snippets need updates.
[ ] Yes
If yes, link a related docs PR and add a docs maintainer as a reviewer. Their approval is required.
|
2025-04-01T06:40:57.035967
| 2022-09-28T03:04:41
|
1388649555
|
{
"authors": [
"pubuzhixing8"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11959",
"repo": "worktile/slate-angular",
"url": "https://github.com/worktile/slate-angular/issues/216"
}
|
gharchive/issue
|
android input issue
Reconsider the input problem of the editor under Android; any ideas are welcome.
keep composition input state for Sougou Keyboard #228
Hi @RaulPROP and @BitPhinix,
I have been trying to handle Android input issues for slate-angular. I tested several input cases in the richtext demo, inlines demo, and markdown demo using multiple input methods like Gboard, Sougou Keyboard, Baidu IME, and others. I also validated some issues mentioned in this PR: https://github.com/ianstormtaylor/slate/pull/4988. It has been working fine so far.
Could someone please help me verify whether there are any other problems? Thank you very much!
|
2025-04-01T06:40:57.051313
| 2015-02-09T10:58:01
|
57015806
|
{
"authors": [
"BeeZerk",
"avilaton",
"worldveil"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11961",
"repo": "worldveil/dejavu",
"url": "https://github.com/worldveil/dejavu/issues/69"
}
|
gharchive/issue
|
Postgresql
Hey, maybe this is not an issue, but I have a question about dejavu. How is it possible to use it with PostgreSQL? I created a new database script and rewrote the SQL statements. But since there is no hex or unhex function in PostgreSQL, it won't work. I used the encode/decode functions, but the decode function, for example, returns a longer string than unhex in MySQL. Is there any solution?
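(For anyone hitting the same wall: since dejavu itself is Python, one dialect-neutral option is to do the hex/binary conversion client-side instead of in SQL. A minimal sketch, with illustrative table and column names only:)
# Convert hex <-> binary in Python so the same SQL works on MySQL and
# PostgreSQL; binascii stands in for HEX()/UNHEX() and encode()/decode().
import binascii

def to_binary(hex_hash):
    # Equivalent of MySQL's UNHEX() / PostgreSQL's decode(..., 'hex').
    return binascii.unhexlify(hex_hash)

def to_hex(binary_hash):
    # Equivalent of MySQL's HEX() / PostgreSQL's encode(..., 'hex').
    return binascii.hexlify(binary_hash).decode("ascii").upper()

# cursor.execute("INSERT INTO fingerprints (hash) VALUES (%s)",
#                (to_binary("DEADBEEF"),))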
EDIT:
I found a way to create a PostgreSQL script. This can be closed.
@BeeZerk do you mean a database adapter? Would welcome a pgsql PR if you have one working.
@worldveil Yeah, I have one now. But my Python knowledge isn't that good, so maybe there are some bugs. But I can create fingerprints from a directory and recognize files. If you want, I'll send it to you.
The best way to do this would be to migrate the current adapter to sqlalchemy which is not very hard.
|
2025-04-01T06:40:57.070724
| 2020-12-22T12:30:14
|
772910787
|
{
"authors": [
"antaldaniel",
"rodrigoalcarazdelaosa"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11963",
"repo": "wowchemy/wowchemy-hugo-modules",
"url": "https://github.com/wowchemy/wowchemy-hugo-modules/issues/2048"
}
|
gharchive/issue
|
Language selector script does not take in new language
I am working on a multi-language website and wanted to add a language that is currently not part of the 34 pre-translated ones, Slovak. I followed the instructions, created an sk.yaml in the i18n directory in root, gave the menu parameters, and the website is built, but it cannot be accessed from the main language (English) interface.
To understand what is happening, I created three languages: en, nl, and sk.
http://...url/sk/ shows the website, but it is not accessible from the English/Dutch interface language selector.
http://...url/nl/ not only shows the Dutch website, but the language selector works as expected.
http://...url/ shows the default English and allows the selection of the Dutch version (nl)
I think that there is a script that enumerates the languages, but I cannot locate it, because all other UI elements are correctly shown in Slovak. I think that the problem is that the string 'Slovak' needs to be inserted into a language list, and of course needs to be translated in the existing languages. For example, the Dutch version's language list knows that English = Engels, but does not know what to call Slovak.
Creating menus
The new documentation (compared to earlier academic versions) suggests creating new toml files for each language. As you can see if you click through, this is not necessary; the Dutch menu comes up nicely when the menus.toml is configured with both English and Dutch.
I tried to configure the Slovak language here, too, and then I also followed the suggested route, i.e. the creation of a separate menu.sk.toml. [This may require clarification in the documentation, but I think both approaches work fine.]
I believe that the problem is in the language selector script, because in fact the Slovak website is created properly, with a Slovak title and Slovak scripting.
Technical details:
Link to your GitHub project: https://github.com/dataobservatory-eu/listen-local-website/
Wowchemy Version (from your go.mod): go 1.15, github.com/wowchemy/wowchemy-hugo-modules/wowchemy v0.0.0-20201220004521-6c434e6de205
Hugo Version (run hugo version): 0.79.0
Browser/OS: Windows10
Wowchemy Template: module github.com/wowchemy/starter-academic
Extra:
Needless to say, as soon as I resolve this issue, I'll send a pull request with a new Slovak localization, and an update of the Dutch, Estonian, and Hungarian versions, as some recent improvements in the UI were not followed in the language.yaml files.
You added your new language in your config/_default/languages.toml right?
|
2025-04-01T06:40:57.085272
| 2024-01-03T12:32:35
|
2063956544
|
{
"authors": [
"coveralls",
"woylie"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11964",
"repo": "woylie/flop_phoenix",
"url": "https://github.com/woylie/flop_phoenix/pull/308"
}
|
gharchive/pull-request
|
replace Phoenix.HTML.Tag
contributes to #299
coverage: 100.0%. remained the same
when pulling 4ce52639d5a7d881fc260878d88ed03f2f4a0432 on remove-phoenix-html-tag-references
into ce208adb84b44c575f4e322d86396a98d5d80bbf on main.
|
2025-04-01T06:40:57.088073
| 2016-03-04T10:58:22
|
138444738
|
{
"authors": [
"danielbachhuber",
"gilbitron"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11965",
"repo": "wp-cli/wp-cli.github.com",
"url": "https://github.com/wp-cli/wp-cli.github.com/pull/92"
}
|
gharchive/pull-request
|
Port Package Functional Tests page from wiki
From https://github.com/wp-cli/wp-cli/issues/2508
Handled this in #112
|
2025-04-01T06:40:57.156450
| 2024-04-15T14:05:28
|
2243781704
|
{
"authors": [
"ChrisWRWX",
"ebhills"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11966",
"repo": "wrangleworks/WranglesPY",
"url": "https://github.com/wrangleworks/WranglesPY/issues/455"
}
|
gharchive/issue
|
Language Detection
Need a Wrangle to simply detect the language of text, not translate it. I searched the DeepL API docs and it doesn't seem to do this directly. It does return the detected language, but you must specify the target language.
I found an Azure API that does this explicitly. What do you think about giving it a try?
https://learn.microsoft.com/en-us/azure/ai-services/language-service/language-detection/how-to/call-api
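(A minimal Python sketch of calling that endpoint; the URL path, api-version, and payload/response shapes are assumptions based on the linked how-to, not a tested implementation:)
# Hypothetical sketch of the Azure language detection REST call; endpoint,
# key, api-version and response shape are assumptions from the linked docs.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-key>"  # placeholder

def detect_language(text):
    resp = requests.post(
        ENDPOINT + "/language/:analyze-text",
        params={"api-version": "2022-05-01"},
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={
            "kind": "LanguageDetection",
            "analysisInput": {"documents": [{"id": "1", "text": text}]},
        },
    )
    resp.raise_for_status()
    doc = resp.json()["results"]["documents"][0]
    return doc["detectedLanguage"]["iso6391Name"]  # e.g. "en"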
The general use case is processing a file that has multiple languages. Specifically, as an input to PRA, I need the language in order to direct the search to the appropriate country / websites.
I'm hoping that we can do this with a Recipe just like we did for the Google Address validation.
Closing this. Achieved with extract.ai using a prompt.
|
2025-04-01T06:40:57.176512
| 2020-11-26T22:08:09
|
751845404
|
{
"authors": [
"ChayimFriedman2",
"PureFox48",
"benpop",
"clsource",
"mhermier",
"ruby0x1"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11967",
"repo": "wren-lang/wren",
"url": "https://github.com/wren-lang/wren/issues/851"
}
|
gharchive/issue
|
[Feature Request] Raw Strings
I think that a simple raw string would be beneficial when using lots of text with double quotes and similar characters that need escaping.
I think using the backtick character (`) (similar to JavaScript) could be an option to delimit a raw string.
var raw = `This is a "Raw" String`
var raw2 = `
This can be done too %(not_interpolated)
`
Thanks 👍
I prefer the Rust syntax, because with backticks there's no way to encode a backtick in the string.
Rust syntax:
r then zero to infinite #s then the string then the same number of #s as in the start.
For example:
r"I am a raw string an I can include \"
r#"I am an another raw string and I can include even " (and of course \ too)"#
r#####"I am a third raw strings and I can include " then up to 4 #s - see: "#### "#####
Syntax:
RAW_STRING_LITERAL :
r RAW_STRING_CONTENT
RAW_STRING_CONTENT :
" <text> "
| # RAW_STRING_CONTENT #
(From The Rust Reference).
You are right that a single backtick character would need to be escaped too.
But that kind of syntax is, in my humble opinion, not very Wren-like.
If the goal is to pass variables, that can be done with replace.
r"My not interpolated "string" with {param}".toString.replace("{param}", 42)
If something can be to pass variables that can be done with replace.
That's true but not relevant.
r"My not interpolated "string" with {param}".toString.replace("{param}", 42)
Your string isn't valid because you have " inside.
how about using Python's """ for delimiting a raw string?
Then you cannot encode """. Yes, this is rarer, but still.
Ok, how about using the current string syntax, but adding a new escape sequence %r()? Then the interpolation logic would detect that and convert the contents to codepoints until the end of the parenthesis, later interpolate the string, and finally convert %r() back from codepoints to a string.
var raw = "
%r("This is a raw string")
"
First, that can collide with format specifiers that we may have in the future.
Second, it does not solve the problem of escaping the end marker.
Third, I find it strange and unnatural.
Maybe it can't be helped but for raw strings, a new keyword must be provided. Unless there is some syntax sugar that could be added to convert to raw codepoints before interpolation. I don't know xd.
Here is a discussion of raw strings in Swift. https://github.com/apple/swift-evolution/blob/master/proposals/0200-raw-string-escaping.md
I'm not arguing that raw strings are not useful, I'm just saying that about the proposed syntax.
C# raw strings are pretty good. I love C#, but in this case, I think that the Rust and C++ approaches are better. Remember that raw strings are often (although not always) used in the context of long strings, and in those places even doubled quotes instead of single ones can confuse and break reading continuity. For example, how does reading this:
@"
<!DOCTYPE html>
<html lang=""en"">
<head>
<title>My Title</title>
<head>
<h1 style=""font-family: Arial;"">Header</h1>
<script src=""script.js""></script>
</html>
"
Versus reading this:
r#"
<!DOCTYPE html>
<html lang="en">
<head>
<title>My Title</title>
<head>
<h1 style="font-family: Arial;">Header</h1>
<script src="script.js"></script>
</html>
"#
When I read the first example my eyes jump here and there because of the extra quotes. The second example is clear and easy to read. I also had in some project a list of strings that were aligned, and the quotes broke the alignment and make everything ugly.
A simple solution would be to extend regular strings to allow newlines to be included. It has the benefit of allowing raw-like strings at the cost of requiring the escape char to be escaped.
Be careful that raw newlines also open up the line termination character, which is OS dependent.
Considering that last point, and that it is an option one would hardly use, all in all I don't find it really worth adding.
Nevertheless, I find that at least the escaped-double-quotes use case should be addressed. It's a common character, especially in JSON and HTML documents. The "fat quotes" (""") can be provided to automatically:
1: Enable using double quotes without escaping
2: Remove incidental white-space
var json = """
{
"my": "json"
}
"""
I'd like to see raw strings added to Wren too.
Personally, I dislike the way C++ and Rust do it. Although they can cope with just about anything it's just too complicated for my taste.
The Go language uses back-ticks to delineate raw strings but a lot of people aren't happy that you can't then include back-ticks themselves in the raw string.
The best solution in my view is to use triple quotes. A number of languages have adopted this as a reasonable solution. You can't then include triple quotes in the raw string but how often does that come up? - not very often I suspect.
A simple solution would be to extend regular string to allow newlines to be included. It has the benefit to allow raw like string at the cost of requiring to escape the escape char.
What wasn't clear when I said that?
Also I think it is important to say here that Wren supports multiline string literals (for some reason it's not stated in the docs).
You can't then include triple quotes in the raw string but how often does that come up? - not very often I suspect.
When you include code in the language that itself contains a raw string. That happened to me.
Single quotes aren't currently used by Wren, couldn't we use a single-quoted string as a raw string?
If we did, then we wouldn't be able to include single-quotes in the raw string and they're commoner in my experience than back-ticks and certainly much more common than triple double-quotes.
Other languages like Elixir use the concept of sigils ~s() and ~S()
https://elixir-lang.org/getting-started/sigils.html#strings
~s(String with escape codes \x26 #{"inter" <> "polation"})
"String with escape codes & interpolation"
~S(String without escape codes \x26 without #{interpolation})
"String without escape codes \\x26 without \#{interpolation}"
I think this has been solved by @ruby0x1 https://github.com/wren-lang/wren/commit/8304fd5ecc8b367f5ece477fd0808f1f0c33f7e8 👍 💯
I'm super happy about it. Thanks Ruby!
I forgot to comment here but I was working on it and it was ready.
Here's the documentation: https://wren.io/values.html#raw-strings
What a pleasant surprise :)
I've been thinking about this for some time and was convinced that triple double-quotes were the best solution for a simple language such as Wren. The ignoring of white-space when the triple quotes are on their own line is also a nice touch.
If your raw string does happen to include code in Wren itself or in a language which also uses triple quotes for raw strings, then this will normally be easy to deal with by temporarily replacing the triple double-quotes with single quotes, viz:
"""
A markdown string with embedded wren code example.
class Example {
construct code() {
var s = '''I'm an embedded raw string!'''
}
}
""".replace("'''", "\"\"\"")
With this feature in place maybe it is a good time to revisit unit testing?
@PureFox48 yea, replace works and you can also do combining
"""
some long parts"""
+ "\"\"\"" +
"""
the rest
"""
|
2025-04-01T06:40:57.190935
| 2017-05-22T04:17:34
|
230280904
|
{
"authors": [
"wrouesnel"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11968",
"repo": "wrouesnel/tail_exporter",
"url": "https://github.com/wrouesnel/tail_exporter/issues/2"
}
|
gharchive/issue
|
Add configurable timeout for metric labels.
When using highly variable metrics we'd like to bound memory use a bit by not persisting metrics which have stopped being sent to the underlying logger.
Default behaviour should be "forever"; practical behaviour, for example in the specific case of logging DNS requests/responses, should let us configure an "age-out" period.
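(Schematically, the age-out could look like the Python sketch below; the names are illustrative only and the exporter itself is not written in Python.)
# Remember when each label set was last written and drop entries older than
# the configured TTL; ttl=None keeps everything forever (the default).
import time

class AgingMetricStore:
    def __init__(self, ttl_seconds=None):
        self.ttl = ttl_seconds
        self.values = {}     # label tuple -> metric value
        self.last_seen = {}  # label tuple -> unix timestamp

    def observe(self, labels, value):
        self.values[labels] = value
        self.last_seen[labels] = time.time()

    def expire(self):
        if self.ttl is None:
            return
        cutoff = time.time() - self.ttl
        for labels in [l for l, t in self.last_seen.items() if t < cutoff]:
            del self.values[labels]
            del self.last_seen[labels]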
Closed by #3
|
2025-04-01T06:40:57.210292
| 2023-05-12T04:44:05
|
1706915214
|
{
"authors": [
"SavinduDimal",
"YasasRangika"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11969",
"repo": "wso2/api-manager",
"url": "https://github.com/wso2/api-manager/issues/1817"
}
|
gharchive/issue
|
[APIM][2.5.0] Invalid code error while optimising for profiles
Description
While testing the Key Manager profile following issue was identified.
Following logs were observed while optimising for the key manager profile.
bin % sh profileSetup.sh -Dprofile=api-key-manager
Starting to optimize API Manager for the Key Manager profile
sed: 1: "../repository/conf/api- ...": invalid command code .
[2023-05-11 16:18:24:N] INFO - Disabled the <DataPublisher> from api-manager.xml file
sed: 1: "../repository/conf/api- ...": invalid command code .
[2023-05-11 16:18:24:N] INFO - Disabled the <JMSConnectionDetails> from api-manager.xml file
sed: 1: "../repository/conf/api- ...": invalid command code .
[2023-05-11 16:18:24:N] INFO - Disabled the <PolicyDeployer> from api-manager.xml file
sed: 1: "../repository/conf/axis ...": invalid command code .
[2023-05-11 16:18:24:N] INFO - Disabled the <transportSender name="ws" class="org.wso2.carbon.websocket.transport.WebsocketTransportSender"> from axis2.xml file
sed: 1: "../repository/conf/axis ...": invalid command code .
[2023-05-11 16:18:24:N] INFO - Disabled the <transportSender name="wss" class="org.wso2.carbon.websocket.transport.WebsocketTransportSender"> from axis2.xml file
Because those commands were not executed, several errors were generated when trying to start the Key Manager.
A similar issue was observed for the other profiles as well.
Product : wso2am-2.5.0
Update level : 68
Steps to Reproduce
Get a U2 updated wso2am-2.5.0 pack.
Run the <PRODUCT_HOME>/bin/profileSetup.sh script using sh <PRODUCT_HOME>/bin/profileSetup.sh -Dprofile=api-key-manager
Affected Component
APIM
Version
2.5.0
Environment Details (with versions)
macOS (Apple M1 Pro)
Relevant Log Output
No response
Related Issues
No response
Suggested Labels
No response
Closing this issue as it has been resolved internally.
|
2025-04-01T06:40:57.227172
| 2022-06-17T07:12:16
|
1274660297
|
{
"authors": [
"AnuGayan",
"vinhphamduc"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11970",
"repo": "wso2/api-manager",
"url": "https://github.com/wso2/api-manager/issues/204"
}
|
gharchive/issue
|
I have tested the above-mentioned version on an Apple M1 Pro chip with OpenJDK Runtime Environment Temurin-<IP_ADDRESS>+1 and couldn't observe the above-mentioned issue
|
2025-04-01T06:40:57.232064
| 2024-02-21T09:10:29
|
2146192566
|
{
"authors": [
"Avishka-Shamendra",
"shavinsenadheera"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11971",
"repo": "wso2/api-manager",
"url": "https://github.com/wso2/api-manager/issues/2500"
}
|
gharchive/issue
|
Maintaining minor OpenAPI document version
Description
APIM 3.2.0 U2 level 332 onwards, we can see the following behavior.
Go to the Publisher Portal and browse to the API definition.
Change the "info.version" property value to "1.0.1" (Assuming the initial value is "1.0.0").
Update the API.
The "info.version" property value reverts back to the "1.0.0" (API's version)
Therefore, cannot maintain the minor version on the OpenAPI document.
Steps to Reproduce
Mentioned in the description.
Affected Component
APIM
Version
3.2.0
Environment Details (with versions)
No response
Relevant Log Output
No response
Related Issues
No response
Suggested Labels
No response
Hi all,
As per discussions within the team, this is not a recommended/valid approach. In 4.x.x versions, the API version in the OAS file should align with the version of the managed API. Hence, we will be closing this issue and we won't send a fix to master.
Thank you
Avishka
|
2025-04-01T06:40:57.236227
| 2024-03-28T12:42:02
|
2213148101
|
{
"authors": [
"CLAassistant",
"piyumaldk"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11972",
"repo": "wso2/apim-apps",
"url": "https://github.com/wso2/apim-apps/pull/656"
}
|
gharchive/pull-request
|
Fixed UI Styling Issues in API Security Audit Report Viewing Page
Fixes https://github.com/wso2/api-manager/issues/2570
New UI
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
|
2025-04-01T06:40:57.263304
| 2020-02-21T09:52:37
|
568850201
|
{
"authors": [
"CLAassistant",
"claassistantio",
"dilee"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11973",
"repo": "wso2/devstudio-tooling-dss",
"url": "https://github.com/wso2/devstudio-tooling-dss/pull/99"
}
|
gharchive/pull-request
|
Refactor DSS editor webapp
Purpose
Refactoring DSS editor webapp
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
All committers have signed the CLA.
|
2025-04-01T06:40:57.264480
| 2021-06-21T07:37:07
|
925921298
|
{
"authors": [
"Nashaath"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11974",
"repo": "wso2/docs-choreo-dev",
"url": "https://github.com/wso2/docs-choreo-dev/pull/175"
}
|
gharchive/pull-request
|
Sync Dev with Main
Purpose
Sync Dev with Main to add Slack and Choreo logo to Config Files
Closing since the changes are already pushed to main.
|
2025-04-01T06:40:57.286697
| 2021-07-20T05:34:51
|
948273727
|
{
"authors": [
"BuddhikaJayanarth",
"wasuradananjith"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11976",
"repo": "wso2/product-apim",
"url": "https://github.com/wso2/product-apim/issues/11448"
}
|
gharchive/issue
|
Cannot generate a REST API with a WSDL file on Windows
Description:
Windows specific bug regarding generating a REST API with a WSDL file.
Steps to reproduce:
On Windows OS:
Go to publisher portal -> "I have a SOAP endpoint"
Implementation type = Generate REST APIs
Input type = WSDL File/Archive
Choose a WSDL file (not a zip file)
Try to create the API on the next page and it will fail
Affected Product Version:
APIM 3.2.0
Environment details (with versions):
OS: Windows 10
Optional Fields
Related Issues:
Suggested Labels:
Suggested Assignees:
The fix for master has already been done via https://github.com/wso2/carbon-apimgt/commit/5329a140308dc3c7ba99ed837a8d90e98738bf1f and https://github.com/wso2/carbon-apimgt/pull/10686. Hence, closing the issue.
|
2025-04-01T06:40:57.288108
| 2019-10-25T11:30:41
|
512463952
|
{
"authors": [
"kavishkafernando",
"msm1992"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11977",
"repo": "wso2/product-apim",
"url": "https://github.com/wso2/product-apim/issues/6694"
}
|
gharchive/issue
|
Scopes can be added with non existing roles
Role validation does not check whether the provided role exists in the environment when adding a new scope.
Affected version : APIM 3.0 rc3
Working in the latest 3.1.0 pack.
|
2025-04-01T06:40:57.290678
| 2015-02-27T04:34:54
|
59188333
|
{
"authors": [
"kasunbg",
"sanethd"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11978",
"repo": "wso2/product-as",
"url": "https://github.com/wso2/product-as/pull/48"
}
|
gharchive/pull-request
|
Implementation of Ghost deployment (Lazy Loading) test cases for WSO2 AS (JIRA TA-591)
Implementation of Ghost deployment (Lazy Loading) test cases for WSO2 AS (JIRA TA-591)
This includes:
Ghost deployment (Lazy Loading) test cases for WSO2 AS.
A REST web service artefact to retrieve the web-app/tenant lazy loading information.
A new "tests-common" module in between "modules/integration" and other common sub modules like "tests-common/admin-client", "tests-common/ui-pages", "tests-common/integration-test-utils".
Hi Saneth,
Can you let me know once you are done with the above changes? I will merge this right away.
Thanks.
@kasunbg This can be merged now... I have made all the changes.
|
2025-04-01T06:40:57.300214
| 2019-06-09T14:45:02
|
453904763
|
{
"authors": [
"IsurangaPerera",
"isharak"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11979",
"repo": "wso2/product-is",
"url": "https://github.com/wso2/product-is/issues/5519"
}
|
gharchive/issue
|
Sort the Certificate Alias list by alphabetical order in SAML config page
Moved from: https://wso2.org/jira/browse/IDENTITY-7112
If you have a long list of Certificate Aliases it's difficult to find one, since the list is not ordered. Can we sort the list alphabetically?
This issue is being closed due to extended inactivity. Please feel free to reopen it if further attention is needed. Thank you for helping us keep the issue list relevant and focused!
|
2025-04-01T06:40:57.304013
| 2020-08-13T06:43:03
|
678194895
|
{
"authors": [
"gayashanbc",
"isharak"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11980",
"repo": "wso2/product-is",
"url": "https://github.com/wso2/product-is/issues/9201"
}
|
gharchive/issue
|
[SCIM2] Get Group call fails when the user list of the role is large
Description
Get Group call fails (timeout) when the user list of the role is large.
The following error can also be observed in the carbon log in IS 5.9.0.
[2020-08-13 11:36:29,788] [] WARN {org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve} - Thread [https-jsse-nio-9443-exec-4] (id=[{6}]) has been active for [600,906] milliseconds (since [8/13/20 11:26 AM]) to serve the same request for [https://localhost:9443/scim2/Groups/9cb38827-ede0-43c6-9568-98d29071da66] and may be stuck (configured threshold for this StuckThreadDetectionValve is [600] seconds). There is/are [1] thread(s) in total that are monitored by this Valve and may be stuck., tenantDomain=carbon.super java.lang.Throwable
Root Cause
The reason is that after the usernames are fetched from the carbon kernel via the getUserListOfRole(roleName) API, the SCIM level calls getUserClaimValue for each username to get the respective userId.
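(Schematically, the cost difference looks like the toy Python below; the dicts and the call counter just stand in for remote user-store round trips, these are not the actual carbon/SCIM APIs:)
# Toy illustration of the N+1 lookup pattern described above.
ROLE_MEMBERS = {"staff": ["user%d" % i for i in range(1000)]}
USER_IDS = {u: "id-" + u for u in ROLE_MEMBERS["staff"]}
calls = 0

def remote(fn, *args):
    # Stand-in for a round trip to the user store.
    global calls
    calls += 1
    return fn(*args)

# Per-user lookup, as described above: 1 + N round trips.
ids = [remote(USER_IDS.get, u) for u in remote(ROLE_MEMBERS.get, "staff")]
print(calls)  # 1001

# Batched lookup: 2 round trips.
calls = 0
members = remote(ROLE_MEMBERS.get, "staff")
ids = remote(lambda users: [USER_IDS[u] for u in users], members)
print(calls)  # 2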
Steps to Reproduce
Assign over 1000 users to a role
Set the MaxUserLimit property to 1000
Call the SCIM2 getGroup API for the respective groupId
Reproduced IS Versions
5.7
5.9
Reproduced User Store
Active Directory
This issue is being closed due to extended inactivity. Please feel free to reopen it if further attention is needed. Thank you for helping us keep the issue list relevant and focused!
|
2025-04-01T06:40:57.307688
| 2021-10-29T19:26:01
|
1039866013
|
{
"authors": [
"CLAassistant",
"jenkins-is-staging"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11981",
"repo": "wso2/product-is",
"url": "https://github.com/wso2/product-is/pull/12731"
}
|
gharchive/pull-request
|
Bump Dependencies #1399539951
Bumps dependencies for product-is. Link : https://github.com/wso2/product-is/actions/runs/1399539951
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. jenkins-is-staging seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you have already a GitHub account, please add the email address used for this commit to your account. You have signed the CLA already but the status is still pending? Let us recheck it.
|
2025-04-01T06:40:57.311013
| 2023-10-15T17:31:21
|
1943981225
|
{
"authors": [
"CLAassistant",
"jenkins-is-staging"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11982",
"repo": "wso2/product-is",
"url": "https://github.com/wso2/product-is/pull/16972"
}
|
gharchive/pull-request
|
Bump Dependencies #6524807879
Bumps dependencies for product-is. Link : https://github.com/wso2/product-is/actions/runs/6524807879
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. jenkins-is-staging seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you have already a GitHub account, please add the email address used for this commit to your account. You have signed the CLA already but the status is still pending? Let us recheck it.
|
2025-04-01T06:40:57.352937
| 2017-09-05T11:55:54
|
255251982
|
{
"authors": [
"ascrutae",
"coveralls"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11983",
"repo": "wu-sheng/sky-walking",
"url": "https://github.com/wu-sheng/sky-walking/pull/398"
}
|
gharchive/pull-request
|
fix time bucket issue and fix issue that check style failed
@pengys5
Changes Unknown when pulling a4c0bb5db4368e29b4afb59ab045e468afb12a19 on ascrutae:feature/debugging into an unknown base on wu-sheng:feature/debugging.
Changes Unknown when pulling 6d46e85a6fe5cdc96f89ee7ef4d3eee6c42200c2 on ascrutae:feature/debugging into an unknown base on wu-sheng:feature/debugging.
|
2025-04-01T06:40:57.364505
| 2024-02-08T17:06:12
|
2125669467
|
{
"authors": [
"Denisalik",
"thedeadferryman"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11984",
"repo": "wunderschild/graphql-java-tools",
"url": "https://github.com/wunderschild/graphql-java-tools/pull/1"
}
|
gharchive/pull-request
|
Add JavaTimeModule
Checklist
[ ] Pull requests follows the contribution guide
[ ] New or modified functionality is covered by tests
Description
This can be configured via the public API (see SchemaParserOptions). No need to patch the library
|
2025-04-01T06:40:57.395800
| 2020-04-30T04:35:17
|
609566939
|
{
"authors": [
"hoangnm",
"msjh80311"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11985",
"repo": "wwayne/react-tooltip",
"url": "https://github.com/wwayne/react-tooltip/issues/591"
}
|
gharchive/issue
|
Tooltip does not show in the Safari iOS browser when the effect prop is set to solid
I tested on Safari on iOS; it won't show the tooltip if I set the effect prop to solid.
+1
Can anyone help with this?
How to reproduce
More than 1 tooltips on a page
On iPad/iPhone
Set the effect prop to solid
Click on the tips, everything looks fine at first.
Click on the tips again. The tooltip contents become invisible.
Expectation
Showing solid tooltips correctly every time when clicking on the tips.
More information
The first tooltip on a page behaves correctly.
Version
<EMAIL_ADDRESS>
|
2025-04-01T06:40:57.408241
| 2018-03-27T09:45:40
|
308896380
|
{
"authors": [
"DietmarSchwertberger",
"GadgetSteve"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11986",
"repo": "wxGlade/wxGlade",
"url": "https://github.com/wxGlade/wxGlade/issues/312"
}
|
gharchive/issue
|
Bug Reporter "How to report" tab
The "how to report a bug" tab of the bug reporter references https://sourceforge.net/p/wxglade/bugs/ rather than https://github.com/wxGlade/wxGlade/issues which is the current location. I am not sure if the mailing list is also incorrect.
wxGlade version v0.8.0 shows the issue.
Thanks. I missed this one as I usually run under a debugger and so never reach the dialog.
It's updated in the repository now.
Fixed with 0.8.1 (see https://github.com/wxGlade/wxGlade/releases )
|
2025-04-01T06:40:57.453383
| 2017-04-20T11:09:51
|
223031214
|
{
"authors": [
"bonzinho",
"wy-ei"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11987",
"repo": "wy-ei/vue-filter",
"url": "https://github.com/wy-ei/vue-filter/issues/12"
}
|
gharchive/issue
|
[Vue warn]: Failed to resolve filter: substring(0,20) (found in component: )
Hello,
I do not know if I'm doing anything wrong, but when I use, for example, the substring(0,20) filter, I get this error: [Vue warn]: Failed to resolve filter: substring (0,20) (Found in component: )
can you help me?
thank you
@bonzinho The reason is you didn't install the filter correctly. Please see this: Install. If it can't work please let me know. :)
Hi again, thanks for the quick response.
Sorry, I'm a noob with webpack,
and in my spa config file I do this:
require('materialize-css');
window.Vue = require('vue');
require('vue-resource');
require('vue-filter');
Vue.http.options.root = appConfig.api_url;
require('./router');
var Vue = require('vue');
var VueFilter = require('vue-filter');
Vue.install(VueFilter); // don't forget install
Now, I think your problem must be solved. 😃
I had tried, but it shows me these errors:
[Filter duplication]: A filter named debounce has already been installed.
(Anonymous) @ vue-filter.js: 2775
vue-filter.js: 2775 [filter duplication]: A filter named uppercase has already been installed.
(Anonymous) @ vue-filter.js: 2775
vue-filter.js: 2775 [filter duplication]: A filter named lowercase has already been installed.
(Anonymous) @ vue-filter.js: 2775
Spa.js: 10 Uncaught TypeError: tete.install is not a function
At Object. (spa.js: 10)
At webpack_require (bootstrap efbb9be ...? 3158: 555)
At fn (bootstrap efbb9be ...? 3158: 86)
At Object. (bootstrap efbb9be ...? 3158: 578)
At webpack_require (bootstrap efbb9be ...? 3158: 555)
At bootstrap efbb9be ...? 3158: 578
At bootstrap efbb9be ...? 3158: 578
Thanks anyway
I had tried, but it shows me these errors:
[Filter duplication]: A filter named debounce has already been installed.
(Anonymous) @ vue-filter.js: 2775
vue-filter.js: 2775 [filter duplication]: A filter named uppercase has already been installed.
(Anonymous) @ vue-filter.js: 2775
vue-filter.js: 2775 [filter duplication]: A filter named lowercase has already been installed.
(Anonymous) @ vue-filter.js: 2775
Spa.js: 10 Uncaught TypeError: Vue.install is not a function
At Object. (spa.js: 10)
At webpack_require (bootstrap efbb9be ...? 3158: 555)
At fn (bootstrap efbb9be ...? 3158: 86)
At Object. (bootstrap efbb9be ...? 3158: 578)
At webpack_require (bootstrap efbb9be ...? 3158: 555)
At bootstrap efbb9be ...? 3158: 578
At bootstrap efbb9be ...? 3158: 578
Thanks anyway
@bonzinho I don't know why you got those error message.
Spa.js: 10 Uncaught TypeError: Vue.install is not a function
Vue.install must be a function. I can check your webpack config; I guess the problem must be in there.
|
2025-04-01T06:40:57.470925
| 2015-04-22T08:19:52
|
70055816
|
{
"authors": [
"angelozerr",
"x-cray"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11988",
"repo": "x-cray/titanium-ternjs",
"url": "https://github.com/x-cray/titanium-ternjs/issues/1"
}
|
gharchive/issue
|
Generated titanium.js
It would be cool if you could host the generated titanium.js in your git repository. Thanks!
Do you encounter any issues with running the generator script? I'm not really following Titanium releases now since I've switched from mobile development. Would you provide PR with generated Tern definitions?
|
2025-04-01T06:40:57.481277
| 2023-01-15T13:49:47
|
1533823771
|
{
"authors": [
"mrexodia"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11989",
"repo": "x64dbg/DotX64Dbg",
"url": "https://github.com/x64dbg/DotX64Dbg/issues/72"
}
|
gharchive/issue
|
Hot reloading removes unchanged plugins from the list
On startup:
Then I modify the BreakpointLog plugin:
Fixed by #76
|
2025-04-01T06:40:57.635978
| 2019-02-05T08:51:32
|
406681060
|
{
"authors": [
"FlowingSPDG",
"xNWP"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11990",
"repo": "xNWP/HLAELiveLink",
"url": "https://github.com/xNWP/HLAELiveLink/issues/6"
}
|
gharchive/issue
|
LiveLink stops when Ctrl+Z is pressed
LiveLink stops when pressing Ctrl+Z for the Undo action in C4D, and sometimes C4D crashes because of it.
Actually, it looks like the UI is still working, but it can't send.
C4D's bug report file and screenshot (zip)
Restarting C4D is needed to fix it.
Hi Flowing, unfortunately most of these bugs are caused by memory errors in the underlying library that HLAELiveLink uses. The only true fix would be to completely re-write a websocket library (which I have considered). But the rate at which crashes happen seem to be small (at least in my tests). If there is enough demand however I will look into re-writing a the websocket backend. In the meantime if you do find any more situations that immediately cause a crash feel free to leave them as a comment on this issue, this will help me further diagnose the underlying problem.
thanks <3 cheers!
Also, C4D => HLAE cam sync stops when the C4D cam has keyframes on camera positions.
I'm not sure whether it's a websocket-related bug or not, though.
Are there keyframes on each frame, or just a few across the timeline? And does it stop only on the keyframes, or does it outright stop working completely?
I'm using your "HLAE CamIO 2 C4D", so it has keyframes on every frame.
It stops completely when the C4D cam has keyframes.
Hmmm interesting, I will definitely have to look into that then. Thanks for the info!
|
2025-04-01T06:40:57.637622
| 2023-11-22T13:49:35
|
2006388350
|
{
"authors": [
"ReggieReo",
"xNatthapol"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11991",
"repo": "xNatthapol/SIP",
"url": "https://github.com/xNatthapol/SIP/issues/57"
}
|
gharchive/issue
|
Log In page returns Server Error (500)
When trying to access the login page, the application returns a Server Error (500).
How to reproduce:
Click the login button from any page.
Thank you for your feedback. We've already resolved it. The login is now working.
|
2025-04-01T06:40:57.798809
| 2021-01-25T18:56:58
|
793641826
|
{
"authors": [
"cmmarslender",
"svanharmelen"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11992",
"repo": "xanzy/go-gitlab",
"url": "https://github.com/xanzy/go-gitlab/pull/1048"
}
|
gharchive/pull-request
|
Add operations_access_level flag to projects
@svanharmelen I've got another PR for you :)
This flag was added in the most recent release of GitLab
Looks good, but I think we should then also add the field to the create and edit structs. Right?
Yes, good call. Added that as well
|
2025-04-01T06:40:57.859904
| 2015-07-19T16:02:59
|
95926819
|
{
"authors": [
"MartijnKaijser",
"SyncedSynapse",
"akshay2000",
"wp9015362"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11993",
"repo": "xbmc/Kore",
"url": "https://github.com/xbmc/Kore/pull/100"
}
|
gharchive/pull-request
|
Added Vibration on Remote Arrow Press
Arrow buttons on the remote give out 100ms vibrations on press. A setting to enable and disable the vibrations has been added too.
Fixes #95.
Thank you for having submitted a pull request for my feature request.
But:
@akshay2000 wrote:
Arrow buttons on remote give out 100ms vibrations on press.
Does your pull request only add vibration for the arrow buttons?
My feature request was about adding vibration to all buttons on the remote.
So, any chance you could add vibration to all buttons on the remote?
My feature request came from the intention to give Kore a tactile feedback.
So, it wouldn't make much sense to only have that tactile feedback on the arrow buttons.
All buttons on the remote should have that tactile feedback IMHO.
Any chance you could do it?
It would be much appreciated.
Regards
@wp9015362 I thought of adding tactile feedback everywhere, but that doesn't really provide any benefit. I did some A/B testing earlier (for the Windows Phone Kodi Remote) and that analytics data suggests that users prefer to have remote keys vibrate.
The reasoning behind this could be that when you have all the buttons vibrating, it's the same as all buttons not vibrating. In essence, there is no way for the user to differentiate between frequently used arrow keys and infrequently used other keys without looking at the phone.
Maybe you could try this version from this debug APK. Try and see if you like it. If not, I'll wait for one of the Team Kodi members to comment on it. I'm not sure if they even want to accept this.
@akshay2000 :
I disagree. I think having vibration on all buttons is great.
I know that it is great because I have an LG G2 smartphone.
The LG G2 / G3 / G4 smartphones come with an app called "LG Quick Remote".
Here you can see a video of it:
https://www.youtube.com/watch?v=5o6Z_YyfZXQ
The LG Quick Remote app has a setting called "Vibrate on tap".
When this setting is enabled, all buttons on the remote do vibrate when they are being pressed.
And this feels amazing.
What's also nice about the LG Quick Remote vibration is that even when you press and hold a button on the remote, it continues to vibrate. But not with a constant vibration: when you hold down the channel-plus or volume-plus button, for example, it makes one short vibration after another with a little break in between, every time an IR blast is sent out.
And it even makes the notification light blink once per IR blast.
It's really nice!
If Kore would do it like that, that would be really awesome.
Please do not just make the arrow buttons vibrate.
Please at least add an option to also make all buttons vibrate.
And, sorry, but I don't want to install APKs manually.
And I really hope that Team Kodi will accept this. Because when you have used LG's Quick Remote, you really miss the vibration feature in Kore.
Regards
PS:
It would also be nice if you could add an option to configure the strength of the vibration.
Regards
@wp9015362 Interesting. I had implemented the ability to send continuous vibration pulses when a button is held down. But then I removed it.
At this point, I really don't know what feature roadmap the Team has planned for the app. And since you are not planning to install the APK (I didn't inject malware, I promise!) there is no point in implementing further features. I don't even have a device to test vibrations. So, if the Team confirms that the feature is planned, then I will go ahead and implement it. If not, you're either stuck with no tactile feedback or manually managing the APK.
I'm not opposed to adding vibration (as an option), and I'll take a look at this later. As for whether all buttons should vibrate, I don't have a strong opinion about this. I'll try it and see if I form an opinion.
@akshay2000 Good work on the remote for Windows (haven't tried it, but it looks good). Regarding this PR, I've just glanced at it, and it looks good. Just check if you need the vibrator on the remoteFragment; I believe it is superfluous.
@SyncedSynapse wrote:
I'm not opposed to adding vibration (as an option), and i'll take a look at this later. As for if all buttons should vibrate i don't have a strong opinion about this. I'll try it and see if i form an opinion.
Well, it would be nice if you would let the users decide for themselves via options.
You could add options for:
Configuring if the vibration is enabled on all buttons or only on the arrow buttons
Configuring the strength of the vibration
Configuring the length of the vibration
Configuring if the notification light blinks when a button is pressed
That way the users could configure how they like the vibration themselves.
That would be better than forcing one specific vibration config on the user.
Regards
I'm ok with the vibration as it defaults to off. We should not add too many options as it's overkill.
Well, yeah, okay, the options for the vibration strength/length and notification light might not be that much needed (but still nice to have anyway).
But an option to enable vibration for all buttons instead of only for the arrow buttons would be essential IMHO.
Because having vibration only for the arrow buttons would make the vibration feature pretty much useless. At least for me.
Regards
We should not add too many options as it's overkill.
Right. Too many choices paradox.
So, here is the plan for me:
Remove unnecessary references.
Enable vibration for all the remote buttons.
Just keep one setting to turn it on or off.
Enable continuous vibration pulse for buttons that send repeated signals.
By the way, @SyncedSynapse Is there a way we could get the Windows Phone/Windows project under the umbrella along with the iOS and Android apps?
I have added vibrations to all the buttons on the remote. This request can now be merged.
@akshay2000 I still haven't been able to test this (I'm in the middle of something else), but looking at the code it looks like you've added vibration to all buttons, including play/ff/rew and home/movies/etc.
I think these buttons shouldn't have vibration, as they are "normal" Android buttons and these don't have vibration associated with them. I think that vibration should only be applied to the d-pad and maybe the surrounding buttons, but not the "normal" ones. Also, vibration helps when the user is navigating Kodi's UI and is not looking at the phone, and this only applies to the d-pad.
@SyncedSynapse That makes sense. In fact, that was my initial idea. I'll make the changes and push a commit.
Looks good, though I feel that 100ms vibration is a little too much. I might change this...
|
2025-04-01T06:40:58.019945
| 2024-02-12T14:39:20
|
2130277275
|
{
"authors": [
"JamieJQuinn",
"Nanoseb",
"pbartholomew08"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11994",
"repo": "xcompact3d/x3d2",
"url": "https://github.com/xcompact3d/x3d2/issues/36"
}
|
gharchive/issue
|
Investigate using pfUnit as formal testing framework
Currently unit tests are defined as regular subroutines and run through ctest. We may get better utilities using pfunit, like better float comparison, test tagging, running tests in parallel.
@slaizet said Thibault tried this out last year and described it as "painful" so I'd like to know what benefits we might get from pfunit before ripping out our current working test framework.
I've used pfUnit a bit in the past, but never done the setup from scratch, so I am not sure how painful that part is. I think the main advantage is being able to cleanly run tests with MPI. ctest can kind of do it (#20), but it seems fairly hacky.
Another option would be to use something like lit (https://llvm.org/docs/CommandGuide/lit.html). We are using it with Paul on another Fortran project and it is fairly straightforward.
I've used pFUnit in the past and found it a pain to work with; I would recommend ctest over lit.
@pbartholomew08 we're currently using ctest!
Given many folks' poor experiences with pfUnit, one fewer dependency to worry about, potential issues with F2008 + Nvidia's compilers, and the fact that ctest seems to be working fine for us now, I consider pfUnit unsuitable.
|
2025-04-01T06:40:58.022774
| 2020-11-05T20:08:41
|
737232504
|
{
"authors": [
"krisukox",
"xd009642"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11995",
"repo": "xd009642/tarpaulin",
"url": "https://github.com/xd009642/tarpaulin/pull/614"
}
|
gharchive/pull-request
|
Add avoid-cfg-tarpaulin flag
I created this pull request to avoid the "cannot find attribute" error that comes from #[cfg_attr(tarpaulin, skip)] in some old dependencies.
In the project I use glutin, which uses wayland-commons v0.21.13. When I try to check coverage using tarpaulin, I get:
"Error: cannot find attribute `skip` in this scope"
The --avoid-cfg-tarpaulin flag removes --cfg=tarpaulin from RUSTFLAGS.
Do you agree with adding such a flag?
Thanks in advance.
Yeah, I've endeavoured to get projects that haven't upgraded to upgrade via PRs (when I've spotted them) but some haven't yet or don't plan on releasing an update for a while. If you just update the changelog I'll approve and merge this.
Done
Awesome, thanks 👍
|
2025-04-01T06:40:58.025895
| 2024-04-24T11:58:15
|
2261122043
|
{
"authors": [
"jeroentja",
"xdan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11996",
"repo": "xdan/jodit",
"url": "https://github.com/xdan/jodit/issues/1118"
}
|
gharchive/issue
|
Remove formatting does not remove all formatting
Jodit Version: 4.1.12
Browser: Chrome
OS: Windows
Is React App: False
Content of the editor:
<p dir="ltr">example p tag</p>
Expected behavior:
Clicking the 'remove formatting' button removes 'dir="ltr"'.
Actual behavior:
Clicking the 'remove formatting' button does not remove it.
Hi, I checked different editors and none of them deletes dir. Please describe how you think this should work, and where you have seen this behavior.
|
2025-04-01T06:40:58.042944
| 2019-09-19T19:48:48
|
495993911
|
{
"authors": [
"cboulay",
"cbrnr"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11997",
"repo": "xdf-modules/xdf-python",
"url": "https://github.com/xdf-modules/xdf-python/pull/36"
}
|
gharchive/pull-request
|
More explicit code for segment slicing in dejittering
I also added a warning for when the effective srate is more than 10% different from the specified srate.
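For reference, the check presumably amounts to something like the sketch below (illustrative only, not the actual implementation; the function name is hypothetical):
import warnings

def warn_on_srate_mismatch(timestamps, nominal_srate):
    """Warn when the rate implied by the timestamps deviates >10% from nominal."""
    effective_srate = (len(timestamps) - 1) / (timestamps[-1] - timestamps[0])
    if abs(effective_srate - nominal_srate) > 0.1 * nominal_srate:
        warnings.warn(
            "Effective srate (%.2f Hz) differs from specified srate (%.2f Hz) "
            "by more than 10%%." % (effective_srate, nominal_srate)
        )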
I think this looks good. +1 for merge.
Some general comments:
I'd prefer if everyone made PRs from a dedicated branch in their fork. This makes it easier for others to pull remote branches. This also means no branches other than master in this repo (upstream).
Our code should try to adhere to PEP8. I know especially the line length of 79 characters is sometimes a problem, but in most cases it is possible.
At least 2 members of the dev team should have seen a PR. Ideally, if a member makes a PR, another member should perform the merge (except of course for trivial changes like typos and whatnot).
WDYT? We could create a short how-to-contribute guide with these points. Do you have anything to add?
Re this PR: do you want to add an entry to CHANGELOG.md? This refactoring could be in a CHANGED section.
I'd prefer if everyone made PRs from a dedicated branch in their fork. This makes it easier for others to pull remote branches. This also means no branches other than master in this repo (upstream).
Sorry, but I'm a maintainer on many different open source repos, and it would be a nightmare to manage if I had personal forks for each of them. This is especially problematic for repos like this that are also submodules, because I guarantee at some point I would end up having the parent repo pointing its submodule ref at my personal repo. I'll just delete the branch after merge.
Our code should try to adhere to PEP8. I know especially the line length of 79 characters is sometimes a problem, but in most cases it is possible.
I try to adhere to all aspects of PEP8 except the 79 character length. There's already quite a bit of debate about whether or not that should still be included. I'll try to do it for this project but I'm not going to change my IDE defaults on my 6 different computers, so I'll probably miss it sometimes.
At least 2 members of the dev team should have seen a PR. Ideally, if a member makes a PR, another member should perform the merge (except of course for trivial changes like typos and whatnot).
I agree with the second sentence. For the first sentence, sometimes you're going to be waiting months for me to reply to something fairly trivial and that's probably worse than having to revert something after the fact. If you think I was too hasty in merging a PR then let me know and we can work on adding proper unit tests for whatever I missed.
I won't ever make a tagged release without getting consensus so 99.999% of users (who rely on pypi) won't see any changes until well after we've had time to play with changes. If you want to do a develop/master model then that works too.
It's already hard enough for me to find time to work on these things, please don't add any more barriers. Community is more important than having a pristine repo.
Sorry but I'm a maintainer on many different open source repos and it would be a nightmare to manage if I had personal forks for each of them. This is especially problematic for repos like this that are also submodules because I guarantee at some point I would end up having the parent repo pointing its submodule ref to my personal repo. I'll just delete the branch after merge.
I have personal forks for most of the repos I contribute to, but it's OK if you want to work on branches in this repo.
I try to adhere to all aspects of PEP8 except the 79 character length. There's already quite a bit of debate about whether or not that should still be included. I'll try to do it for this project but I'm not going to change my IDE defaults on my 6 different computers, so I'll probably miss it sometimes.
That's OK. If I spot any long lines in any PRs I review I will try to shorten them if it's not too much work.
I agree with the second sentence. For the first sentence, sometimes you're going to be waiting months for me to reply to something fairly trivial and that's probably worse than having to revert something after the fact. If you think I was too hasty in merging a PR then let me know and we can work on adding proper unit tests for whatever I missed.
This comment wasn't intended to criticize something you did, it was just an idea which I thought made sense to implement. Having said that, if you agree that at least two devs should have seen a PR, this means that for the majority of PRs (which are made by a dev) another dev will have to review it.
I won't ever make a tagged release without getting consensus so 99.999% of users (who rely on pypi) won't see any changes until well after we've had time to play with changes. If you want to do a develop/master model then that works too.
No, that's fine, let's continue making releases based on consensus.
It's already hard enough for me to find time to work on these things, please don't add any more barriers. Community is more important than having a pristine repo.
I certainly didn't want to add more barriers. I thought that by providing some guidelines maintaining should be less of a burden, but we don't have to implement all or any of my suggestions.
Thanks @cboulay!
|
2025-04-01T06:40:58.060210
| 2023-09-21T18:16:38
|
1907518923
|
{
"authors": [
"carbaj03",
"javipacheco"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11998",
"repo": "xebia-functional/xef",
"url": "https://github.com/xebia-functional/xef/issues/442"
}
|
gharchive/issue
|
Allow set token manually in conversation
OpenAI.conversation { }
When I use the conversation block, it always reads the token from the environment, but I want to set the token manually.
OpenAI.conversation(here_token) { }
The problem:
@JvmField val FromEnvironment: OpenAI = OpenAI()
suspend fun <A> conversation(block: suspend Conversation.() -> A): A =
block(conversation(LocalVectorStore(FromEnvironment.DEFAULT_EMBEDDING)))
This always ends up calling the environment because the token is set to null by default.
class OpenAI(internal var token: String? = null, internal var host: String? = null)
For now, the token is part of the model, not part of the Conversation. If you want to use a model with your own token, you should do something like this:
OpenAI.conversation {
val chat = OpenAI(token = "your_token").DEFAULT_CHAT
val response = promptMessage("What is the meaning of life?", model = chat)
println(response)
}
Does that make sense to you?
There is a problem: to create OpenAI.conversation { }, OpenAI is always called with a null token by default.
The second call, OpenAI(token = "your_token").DEFAULT_CHAT, is okay.
It would be interesting if only one OpenAI() were created for the entire block. With context receivers this will be easier in the future.
Yes, this is a known error. FromEnvironment should be lazy... and I agree with you about the conversation block.
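For illustration, the lazy fix could be as small as the sketch below (hypothetical; note that @JvmField cannot be applied to a delegated property, so it would have to be dropped or replaced, e.g. with a @JvmStatic accessor):
// Construction (and the environment lookup) is deferred until first use.
val FromEnvironment: OpenAI by lazy { OpenAI() }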
|
2025-04-01T06:40:58.066697
| 2024-07-18T21:55:53
|
2417484819
|
{
"authors": [
"JReinhold",
"xeho91"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11999",
"repo": "xeho91/svelte-ast-print",
"url": "https://github.com/xeho91/svelte-ast-print/issues/88"
}
|
gharchive/issue
|
Optional support for formatting
Have you considered adding optional support for formatting the output with Prettier/Biome?
You could have an optional peer dependency on prettier and prettier-plugin-svelte, and a format: 'prettier' option to trigger it:
const output = print(svelteAST, { format: 'prettier' });
Alternatively, a small section on how to achieve it manually - e.g. run the output through Prettier before writing it to the file.
Just to make sure I understand the proposed idea, let's focus on Prettier to simplify things.
Passing { format: "prettier" } in the options would trigger post-formatting with it, right?
So from my research, that means:
We would have to change print to be an async function 😕 - see: https://prettier.io/docs/en/api#prettierformatsource-options
I'm not in favour of this, because I'd like to align closely with esrap's functionality and API and keep it synchronous. But I can be persuaded to look at it differently.
... I had a thought, but I lost it, so I'll leave this as a placeholder.
Also, while working on this package, I see a possibility of refactoring this API to implement more formatting options, where we wouldn't need any formatter at all, but rather a feature that auto-imports the formatting options from an existing Prettier or Biome config. That's definitely on my mind 👍.
At the moment, I would like to focus on supporting printing TypeScript nodes, once the maintainers of Svelte can find the time. That way I don't get lost in my own refactoring before the crucial features are completed. 😅
I'm definitely convinced about writing a guide section on how to use this with formatter APIs.
A quick example for Prettier:
import fs from "node:fs";
import { format } from "prettier";
import { print } from "svelte-ast-print";
let ast; // ... there should be a part on how you get a Svelte AST - either using the parser from Svelte, or building your own(?)
const stringifiedCode = print(ast);
const formattedCode = await format(stringifiedCode, {
  // "parser" must be set directly here; "overrides" only applies in config files.
  parser: "svelte",
  plugins: ["prettier-plugin-svelte"],
});
// ... there should be a part with what you want to do with the final output.
// For example saving to file:
fs.writeFileSync("Button.svelte", formattedCode);
The async argument is a good one, makes sense to keep the API sync.
A good example of how to combine this with Prettier is probably the best option right now, then. Probably using Prettier's auto-resolving of the user config. https://prettier.io/docs/en/api.html#prettierresolveconfigfileurlorpath--options
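For illustration, resolving the user's own config could look like this (a sketch; resolveConfig returns null when no config file is found, and the file name is only used for config lookup):
import { resolveConfig, format } from "prettier";

const userConfig = await resolveConfig("Button.svelte");
const formattedCode = await format(stringifiedCode, {
  ...userConfig, // spreading null is a no-op
  parser: "svelte",
  plugins: ["prettier-plugin-svelte"],
});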
I've created PR #93 with a small guide on how to get output formatted to the user's needs using Prettier. I'd love to get your review and feedback based on your input: @JReinhold @manuel3108.
|
2025-04-01T06:40:58.092689
| 2023-05-25T18:29:04
|
1726294056
|
{
"authors": [
"rssurdikar"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:12000",
"repo": "xerial/snappy-java",
"url": "https://github.com/xerial/snappy-java/issues/452"
}
|
gharchive/issue
|
Veracode Static scan - Very high security issues found with <IP_ADDRESS> version
Hi,
I am using snappy-java's <IP_ADDRESS> jar in my application, and when I ran a Veracode static scan, I found the following very-high-severity vulnerability issues.
Command or Argument Injection(1 flaw)
Description
Command or argument injection vulnerabilities occur when data enters an application from an untrusted source and is used to
dynamically construct and execute a command. In the case of OS command injection, an attacker may be able to either alter
the command executed by the application or append additional commands. In the case of argument injection, the attacker may
influence the behavior of the program in other ways, for example, by changing the destination of an outbound network request or
injecting additional commands into an argument or parameter. The command is typically executed with the privileges of the
executing process and gives an attacker a privilege or capability that he would not otherwise have.
Recommendations
Careful handling of all untrusted data is critical in preventing injection attacks. Using one or more of the following techniques
provides defense in depth and minimizes the likelihood of a vulnerability.
If possible, use library calls rather than external processes to recreate the desired functionality.
Validate user-supplied input using positive filters (white lists) to ensure that it conforms to the expected format, using
centralized data validation routines when possible.
Select safe API routines. Some APIs that execute system commands take an array of strings as input rather than a single
string, which protects against some forms of command injection by ensuring that a user-supplied argument cannot be
interpreted as part of the command.
Associated Flaws by CWE ID:
Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection')(CWE ID 78)(1 flaw)
Description
This call contains a command injection flaw. The argument to the function is constructed using untrusted input. If an
attacker is allowed to specify all or part of the command, it may be possible to execute commands on the server with
the privileges of the executing process. The level of exposure depends on the effectiveness of input validation routines,
if any.
Effort to Fix: 3 - Complex implementation error. The fix is approx. 51-500 lines of code. Up to 5 days to fix.
Recommendations
Validate all untrusted input to ensure that it conforms to the expected format, using centralized data validation routines
when possible. When using black lists, be sure that the sanitizing routine performs a sufficient number of iterations to
remove all instances of disallowed characters. Most APIs that execute system commands also have a "safe" version of
the method that takes an array of strings as input rather than a single string, which protects against some forms of
command injection.
Instances found via Static Scan
| Flaw Id | Module # | Class # | Module | Location |
| --- | --- | --- | --- | --- |
| 6512 | 130 | - | snappy-java-<IP_ADDRESS>.jar | org/.../xerial/snappy/OSInfo.java 178 |
Code:
int exitCode = Runtime.getRuntime().exec("which readelf").waitFor();
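For reference, the array form the recommendation describes would look like the sketch below. Note that Runtime.exec(String) tokenizes on whitespace and does not invoke a shell, so this particular finding is arguably a false positive; still, the array form makes the intent explicit and satisfies the scanner:
final class ReadelfCheck {
    // Command and argument are separate array elements, so neither can be
    // reinterpreted as part of a larger command line.
    static int whichReadelf() throws java.io.IOException, InterruptedException {
        String[] cmd = {"which", "readelf"};
        return Runtime.getRuntime().exec(cmd).waitFor();
    }
}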
Untrusted Search Path(2 flaws)
Description
Executing commands or loading libraries from an untrusted source or in an untrusted environment can cause an application to
execute malicious commands (and payloads) on behalf of an attacker.
If an attacker is allowed to specify all or part of a filename to a certain API function, it may be possible to load arbitrary libraries.
In addition, certain functions perform automatic path searching, iterating over a list of paths to search for desired resources,
such as executables, libraries, or configuration files. If an attacker can modify the path, for example, by manipulating an
environment variable, he may be able to trick the program into referencing an attacker-controlled resource. Similarly, if the
search path is static but an attacker is able to place a malicious copy of the resource higher in the search order than the file the
application intends to load, then the application will load the malicious version.
Recommendations
Avoid using user-supplied filenames or paths. When calling methods that load libraries or launch processes, ensure that full
paths are provided specifying the resource to be loaded.
Associated Flaws by CWE ID:
Process Control (CWE ID 114)(2 flaws)
Description
A function call could result in a process control attack. An argument to a process control function is either derived from
an untrusted source or is hard-coded, both of which may allow an attacker to execute malicious code under certain
conditions. If an attacker is allowed to specify all or part of the filename, it may be possible to load arbitrary libraries. If
the location is hard-coded and an attacker is able to place a malicious copy of the library higher in the search order than
the file the application intends to load, then the application will load the malicious version.
Effort to Fix: 2 - Implementation error. The fix is approx. 6-50 lines of code. 1 day to fix.
Recommendations
Always validate untrusted input to ensure that it conforms to the expected format, using centralized data validation
routines when possible. When using hard-coded file locations, use fully-qualified filenames to ensure the proper library
is being loaded.
Instances found via Static Scan
| Flaw Id | Module # | Class # | Module | Location |
| --- | --- | --- | --- | --- |
| 6519 | 173 | - | snappy-java-<IP_ADDRESS>.jar | org/.../snappy/SnappyLoader.java 180 |
| 6522 | 173 | - | snappy-java-<IP_ADDRESS>.jar | org/.../snappy/SnappyLoader.java 183 |
Code:
if (nativeLibFile != null) {
// Load extracted or specified snappy java native library.
System.load(nativeLibFile.getAbsolutePath());
}
Thanks, @xerial. I did not know that previously. I will definitely use advisories next time. Let me close this issue right now.
|
2025-04-01T06:40:58.096164
| 2016-09-28T07:31:34
|
179699770
|
{
"authors": [
"Dunemaster"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:12001",
"repo": "xerial/sqlite-jdbc",
"url": "https://github.com/xerial/sqlite-jdbc/issues/161"
}
|
gharchive/issue
|
JDBC driver extracts a new dll library on every call of a java program into Temp/
I ran into an issue already mentioned on Bitbucket, but not here ([https://bitbucket.org/xerial/sqlite-jdbc/issues/202/jdbc-driver-extracts-a-new-dll-library-on]).
I noticed today that my Windows system drive is out of space after running a Java program I wrote, which imported 500k different files into a database.
Inspecting the issue using Windows's "Disk Cleanup" shows around 129 GB of data in the user's temp folder, which turns out to be the SQLite JDBC native DLL: each run of my Java program extracted a new DLL, 500k in total, each taking 720 KB of disk space.
The file name looks like this: sqlite-<IP_ADDRESS>-00a5dca3-f7cb-44b9-9014-de47cf377bd2-sqlitejdbc.dll, where the 00a5dca3-f7cb-44b9-9014-de47cf377bd2 part is always different.
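As a possible workaround until this is fixed: if I recall correctly, sqlite-jdbc supports system properties to control where the native library comes from, so you can point the driver at a pre-installed DLL (no per-run extraction) or at least redirect the extraction directory (the paths below are illustrative):
// e.g. early in main(), before the driver is first loaded:
System.setProperty("org.sqlite.lib.path", "C:\\opt\\sqlite");
System.setProperty("org.sqlite.lib.name", "sqlitejdbc.dll");
// Or redirect extraction to a folder you clean up yourself:
System.setProperty("org.sqlite.tmpdir", "C:\\temp\\sqlite-natives");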
Duplicate https://github.com/xerial/sqlite-jdbc/issues/80
|
2025-04-01T06:40:58.142223
| 2022-06-28T11:38:13
|
1287211444
|
{
"authors": [
"dsousa12",
"joaosantos99"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:12002",
"repo": "xgeekshq/split",
"url": "https://github.com/xgeekshq/split/issues/282"
}
|
gharchive/issue
|
[FIX]: Board Settings switches not working properly
Description
When you change the state of "Option to post cards anonymously" the "Limit votes" are checked too, and if you uncheck the "Limit votes" and "Option to post cards anonymously" the "Limit votes" are checked again.
Steps to reproduce
Log in to the app;
On the Boards/Dashboard page, access a board;
Open the Board Settings dialog;
Change the status of "Option to post cards anonymously" and "Limit votes".
Expected result
Changing the status of one switch does not change the other switch's state.
Actual result
When you change the state of "Option to post cards anonymously", the status of "Limit votes" changes too.
I can't reproduce your error
|
2025-04-01T06:40:58.143924
| 2020-03-17T16:14:49
|
583130328
|
{
"authors": [
"TomTirapani"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:12003",
"repo": "xh/hoist-react",
"url": "https://github.com/xh/hoist-react/issues/1780"
}
|
gharchive/issue
|
Grid exports are rendered upside down
When exporting from Toolbox, Excel renders the export with an upside-down font:
Closing - this was due to an incompatibility between an outdated version of Excel and macOS Catalina.
|