246150379 | Deprecated selector in drag-drop-text\styles\drag-drop-text.less
In drag-drop-text\styles\drag-drop-text.less:
Starting from Atom v1.13.0, the contents of atom-text-editor elements are no longer encapsulated within a shadow DOM boundary. This means you should stop using :host and ::shadow pseudo-selectors, and prepend all your syntax selectors with syntax--. To prevent breakage with existing style sheets, Atom will automatically upgrade the following selectors:
atom-text-editor::shadow .drag-drop-text .region => atom-text-editor.editor .drag-drop-text .region
Automatic translation of selectors will be removed in a few release cycles to minimize startup time. Please, make sure to upgrade the above selectors as soon as possible.
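The upgrade Atom describes is mechanical enough to sketch as a plain string rewrite. A minimal illustration (not Atom's actual upgrade code), covering only the simple `::shadow` case shown in the message above:

```python
# Minimal sketch (not Atom's actual implementation) of the selector upgrade
# the deprecation message describes: drop the ::shadow boundary and match
# the .editor class on atom-text-editor instead.
def upgrade_selector(selector: str) -> str:
    """Rewrite a deprecated shadow-DOM selector to its post-v1.13.0 form."""
    return selector.replace("atom-text-editor::shadow",
                            "atom-text-editor.editor")
```

Applied to the flagged selector, this yields exactly the replacement Atom suggests.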
not working
Uploaded an update to the drag-drop-text.less file (#11). This bug can be marked as closed once @mark-hahn merges my branch back into main and releases the new version.
FYI: as per the error message, Atom implements a temporary fix for this issue but deprecation-cop still flags it.
My fix simply makes the temporary fix permanent by removing ::shadow from the code.
Issues #7, #9 & #10 are all for the same issue.
Fixed
| gharchive/issue | 2017-07-27T19:52:28 | 2025-04-01T06:44:53.677403 | {
"authors": [
"HAfsari",
"keni8em",
"noseGoblin"
],
"repo": "mark-hahn/drag-drop-text",
"url": "https://github.com/mark-hahn/drag-drop-text/issues/7",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2106661230 | Quadratic output size explosion with tables extension
Same issue as https://github.com/raphlinus/pulldown-cmark/issues/832 and https://github.com/jgm/commonmark-hs/issues/145.
The problem is that, because the table extension fills in missing table cells, you can force the output to grow as the square of the input by adding one column and one row at a time. This is a side effect of the extension as specified, and follows from the geometric definition of "squaring": the output size is proportional to the area of a square, while the input size is proportional only to its side length.
$ python -c 'N=100; print("x|" * N + "\n" + "-|" * N + "\n" + "x|\n" * N)' | ./bin/markdown-it.mjs | wc -c
102362
$ python -c 'N=1000; print("x|" * N + "\n" + "-|" * N + "\n" + "x|\n" * N)' | ./bin/markdown-it.mjs | wc -c
10023062
$ python -c 'N=10000; print("x|" * N + "\n" + "-|" * N + "\n" + "x|\n" * N)' | ./bin/markdown-it.mjs | wc -c
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
The output size grows roughly as the square of the input size: a factor-of-10 growth in the input means a factor-of-100 growth in the output.
cmark-gfm avoids this problem by putting a limit on the number of cells it will "autocomplete." md4c simply limits the number of columns.
Fixed the same way as in the two parsers linked above (i.e. limit the number of autocomplete cells).
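The mitigation described here can be sketched as a cell budget during row padding. A minimal illustration (not markdown-it's actual code; the limit value is hypothetical):

```python
# Sketch of capping autocompleted table cells: once the budget of
# synthesized (empty) cells is exhausted, stop padding short rows.
MAX_AUTOCOMPLETED_CELLS = 200_000  # hypothetical limit

def pad_rows(rows, n_columns, max_autocompleted=MAX_AUTOCOMPLETED_CELLS):
    budget = max_autocompleted
    padded = []
    for row in rows:
        missing = max(0, n_columns - len(row))
        if missing <= budget:
            budget -= missing
            padded.append(row + [""] * missing)
        else:
            padded.append(list(row))  # budget exhausted: leave row short
    return padded
```

With such a cap, output size grows linearly with the input once the budget is reached, instead of quadratically.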
Thanks for reporting!
| gharchive/issue | 2024-01-29T23:47:49 | 2025-04-01T06:44:53.693257 | {
"authors": [
"notriddle",
"rlidwka"
],
"repo": "markdown-it/markdown-it",
"url": "https://github.com/markdown-it/markdown-it/issues/1000",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1448582256 | Retrieving element values without range indexes in MarkLogic
I have the following situation in MarkLogic: I have a database with a large number of documents, each holding hundreds of fields, and I want to provide a feature where the user can sort by any attribute. I don't want to add all attributes to range indexes. How could I achieve this?
An important requirement is that we need pagination.
I've had plenty of customers say, "We want to sort on any field". In practice though, sorting on most fields didn't make any sense. For example, if you have 1 million customers and you sort on last name, that's only going to be helpful for finding customers with last names starting with A or Z. Free text search is usually a far better approach.
Thus, I've found sorting on date/dateTime fields to be very helpful; sorting on numeric fields to sometimes be helpful (bucket facets usually work better); and sorting on string fields to rarely be helpful. If a string field has a bounded set of values, a facet is almost always the best choice (though that of course requires a range index).
So my first recommendation is: consider which attributes of documents would really be valuable for sorting, add range indexes to those, and you're done.
If that doesn't suffice, then you've probably already realized that sorting by a non-indexed field is not going to scale well, regardless of what database / search engine you use. The next option with MarkLogic would be to consider using the Optic API as a way of indexing values without the higher memory cost of range indexes (the Optic API will index rows via the triple index, which is not pinned into memory like range indexes are). You'd need to use something like a TDE to define all the attributes though so that ML can index rows. Given that you have hundreds of fields, that's going to be a large TDE that results in additional disk space being used for all those rows to be indexed.
The above approach is doable, but... I go back to my original point, which is that I've found it to be extremely rare for "Sort by any field" to be worth it in terms of cost/benefit. Figuring out the (likely small number of) fields that truly require sorting will likely save you a lot of time and money.
Closing, but please reply back if you'd like to continue the conversation about your customer's use case.
| gharchive/issue | 2022-11-14T19:01:42 | 2025-04-01T06:44:53.703746 | {
"authors": [
"dalbir",
"rjrudin"
],
"repo": "marklogic/node-client-api",
"url": "https://github.com/marklogic/node-client-api/issues/690",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2207128404 | [feature-request] Support editing directly in markmap? And sync between markdown and markmap
Current markmap is a readonly mindmap view of markdown.
Is it possible to support editing directly in markmap?
All modifications are synced between the markdown and the markmap.
why editing in markmap (mindmap)?
it's more intuitive,
it's more convenient to add sibling (Enter key) and child (Tab key) nodes,
it's much easier to fold/unfold.
why editing in mindmap and sync to markdown?
markdown is the actual storage file of this mindmap
it's (kinda) a public format
besides being viewed as a mindmap, it can be converted to other views (via pandoc), e.g. a PDF-like or PPT-like view.
This is a good idea, but it's not feasible because the conversion from Markdown to Markmap is not lossless, and the two formats are not equivalent. So if we want to sync data from both sides, we can only support a subset of features, which doesn't seem like a good idea to me. Therefore, I prefer using Markmap as a rendering tool only, similar to Mermaid.
| gharchive/issue | 2024-03-26T02:52:20 | 2025-04-01T06:44:53.707092 | {
"authors": [
"donaldwuid",
"gera2ld"
],
"repo": "markmap/markmap",
"url": "https://github.com/markmap/markmap/issues/238",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
113850805 | 2^-1 throws exception
PRINT 2^-1 throws the exception LearningBasic.Parsing.ParserException: Missing variable or reference to an array element.
Expected behavior: printing 0.5, because 2^-1 = 1/2^1 = 0.5.
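The usual fix for this class of parser bug is to parse a unary expression after `^`, so that `-` is accepted as a sign rather than triggering "missing variable or reference". A minimal sketch (not LearningBasic's actual parser):

```python
# Minimal expression evaluator sketch: after consuming '^', recurse into a
# unary expression so the exponent may start with '-'. '^' is parsed
# right-associatively.
import re

def tokenize(src):
    return re.findall(r"\d+\.?\d*|[-+^()]", src)

def parse_power(tokens):
    base = parse_unary(tokens)
    if tokens and tokens[0] == "^":
        tokens.pop(0)
        return base ** parse_power(tokens)  # exponent may be signed
    return base

def parse_unary(tokens):
    if tokens and tokens[0] == "-":
        tokens.pop(0)
        return -parse_unary(tokens)
    return parse_atom(tokens)

def parse_atom(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        val = parse_power(tokens)
        tokens.pop(0)  # consume ')'
        return val
    return float(tok)

def evaluate(src):
    return parse_power(tokenize(src))
```

With this change, evaluate("2^-1") returns 0.5 instead of raising a parse error.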
Fixed while solving Techdebt #6.
| gharchive/issue | 2015-10-28T15:40:27 | 2025-04-01T06:44:53.714446 | {
"authors": [
"markshevchenko"
],
"repo": "markshevchenko/learningbasic",
"url": "https://github.com/markshevchenko/learningbasic/issues/1",
"license": "unlicense",
"license_type": "permissive",
"license_source": "bigquery"
} |
72412332 | Cakephp 3.0 Asset Compress
Hi,
Are you working on a CakePHP 3.0 asset compress plugin? I am not able to use it in my CakePHP 3.0 application.
Can you please advise me on this?
Thanks,
JP
Yes. The 3.0 branch works with cake3. I've got a few deployed sites using it.
| gharchive/issue | 2015-05-01T11:28:28 | 2025-04-01T06:44:53.715714 | {
"authors": [
"jayprakashj",
"markstory"
],
"repo": "markstory/asset_compress",
"url": "https://github.com/markstory/asset_compress/issues/266",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
855063707 | Feature Request: Free Beds in hospitals
Story:
The user wants to get information about the beds in hospitals
Todo:
Add fetching, parsing, saving and accessing the data
Datasource:
https://www.intensivregister.de/#/aktuelle-lage/reports
(https://diviexchange.blob.core.windows.net/%24web/DIVI_Intensivregister_Auszug_pro_Landkreis.csv)
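The fetch/parse part of the todo above could be sketched as follows (a sketch only; the column names used in the example are illustrative and should be checked against the actual CSV header):

```python
# Sketch of fetching and parsing the per-district DIVI CSV linked above.
import csv
import io
import urllib.request

DIVI_CSV_URL = ("https://diviexchange.blob.core.windows.net/"
                "%24web/DIVI_Intensivregister_Auszug_pro_Landkreis.csv")

def parse_divi_csv(text):
    """Parse the CSV body into one dict per district (Landkreis)."""
    return list(csv.DictReader(io.StringIO(text)))

def fetch_divi_csv(url=DIVI_CSV_URL):
    """Download and parse the current per-district report."""
    with urllib.request.urlopen(url) as resp:
        return parse_divi_csv(resp.read().decode("utf-8"))
```

Saving and accessing the data would then build on the list of dicts returned here.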
Hi, you can also use the API from DIVI directly. It would be kind of unnecessary to host it here if the organisation already offers it itself.
Source: https://www.intensivregister.de/api/public/reporting/laendertabelle
Duplicate #73
| gharchive/issue | 2021-04-10T13:05:40 | 2025-04-01T06:44:53.779709 | {
"authors": [
"ChristophKrause",
"HappyFr1tz",
"Rubber1Duck"
],
"repo": "marlon360/rki-covid-api",
"url": "https://github.com/marlon360/rki-covid-api/issues/163",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1648166343 | Window Heating and climatisation
Hi, is it possible to separate the window heating and the climatisation?
thank you for this plugin
It's perfect For me!
Thank you very much!
| gharchive/issue | 2023-03-30T19:08:51 | 2025-04-01T06:44:53.835367 | {
"authors": [
"Libar19"
],
"repo": "marn06/homebridge-vw",
"url": "https://github.com/marn06/homebridge-vw/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1435430068 | [ENHANCEMENT] Non-vectorised Fields
Overview
Currently tensors are produced for all string and image fields. The fixed size of stored tensors can lead to a significant increase in storage for text fields, and there are many applications and use cases where certain fields only need to be keyword/lexically searched.
This is a feature request to allow end users to highlight text fields (or partial sub-strings) to skip tensorisation.
Proposed Design
The proposed design is to allow users to specify, per document, whether a field should have a tensor created. The default is that each field gets a tensor; a deny list, passed per add_documents call, specifies the fields for which inference and tensorisation should be skipped.
Marqo Client
For the client, this would look like:
mq.index("an-index").add_documents([
{
"Title": "Palamedes",
"Description": "Palamedes is set in the days before King Arthur's reign, and describes the adventures of the fathers of Arthur, Tristan, Erec and other knights of Camelot.",
"ReferenceLocation": "ISBN",
"Author": "Rustichello da Pisa"
}],
non_tensor_fields=["Author", "ReferenceLocation"]
)
Marqo
The kwarg non_tensor_fields will be a query parameter on the /indexes/{index_name}/documents POST. Since there are no index-level defaults, marqo instances will default to vectorising every field f that is not listed in non_tensor_fields.
Alternative - Field Level DSL
An alternate design draws inspiration from special tokens within NLP (e.g. stop/start tokens) to let users designate non-tensor text on a per-character (and therefore, per-field) basis.
Marqo Client
When adding documents to an index, a user can specify that no tensor should be made for text, via tokens:
mq.index("an-index").add_documents([
{
"Title": "<marqo no_tensor>The Travels of Marco Polo </marqo no_tensor>",
"Description": "A 13th-century travelogue describing Polo's travels",
}
])
Or with py-marqo library support:
mq.index("an-index").add_documents([
{
"Title": marqo.without_tensor("The Travels of Marco Polo"),
"Description": "A 13th-century travelogue describing Polo's travels",
}
])
Which supports embedding non-tensor text within a larger, tensor field.
mq.index("an-index").add_documents([
{
"Title": f"Start of the text is tensored. {marqo.without_tensor('The Travels of Marco Polo')} The end of the text is tensored.",
"Description": "A 13th-century travelogue describing Polo's travels",
}
])
This provides two forms of flexibility:
A per-character designation of what should have a tensor.
A token/syntax DSL for further text-based functionality (e.g. emphasis, chunk control).
Marqo
Marqo is responsible for parsing and understanding the text DSL/tokens, chunking and constructing tensors for the appropriate text sections, and storing the tensors accordingly. Marqo must do two new things:
Chunk text without the ignored text sections (and skip chunking & inference if the entire field is ignored)
Save the full-text for lexical search support.
Storage
No changes to storage will be needed.
Problems
It forces the client to repeatedly mark their non-tensor text on every add_documents call.
Users must explicitly ask for lexical search for non-tensor sections.
Alternatives - Field level Denylist
An alternative approach is to specify, at the field level (instead of per document), which text should not be tensorised. This would allow users to specify, at both index and document creation time, whether a field should be tensorised.
Marqo Client
On a new index
mq.create_index("new-index", {
"non_tensor_fields": ["Title", "another_text_field"],
# Other settings, as before
"model":"ViT-L/14"
})
# Only `Description` field will have an associated tensor.
mq.index("new-index").add_documents([
{
"Title": "The Travels of Marco Polo",
"Description": "A 13th-century travelogue describing Polo's travels",
"another_text_field": "ISBN"
},
{
"Title": "Extravehicular Mobility Unit (EMU)",
"Description": "The EMU is a spacesuit that provides environmental protection, "
"mobility, life support, and communications for astronauts",
"_id": "article_591",
"another_text_field": "online"
}
])
Or on a new document
mq.index("new-index").add_documents([
{
"Title": "Palamedes",
"Description": "Palamedes is set in the days before King Arthur's reign, and describes the adventures of the fathers of Arthur, Tristan, Erec and other knights of Camelot.",
"another_text_field": "ISBN",
"new_text_field": "Rustichello da Pisa"
}],
non_tensor_fields=["another_text_field", "new_text_field"] # This applies for all documents going forward
)
As per the above add_documents example, both new and existing fields can be converted to non-tensor fields. Converting from non-tensor fields to tensor fields is not within the scope of this feature request as it would cause an unexpected search experience (a field now having tensors to search over will miss the original documents that were not tensorised). non_tensor_fields, then, represents a one-way door on index fields.
Marqo
Non-tensor fields can be stored at an index level, specifically, in the _meta.index_settings field from the _mappings call (see get_index_info). This information is available when adding documents. Marqo can then determine which fields to perform inference on. Cache staleness on index information only has the penalty of unneeded inference. When a field is marked as non-tensor, tensors can still be stored against them (i.e. this will avoid issues with rolling update of index information on marqo instances).
Storage
As mentioned above, non-tensor fields are tracked as a denylist, which can be stored at the index level under the JSON key ._meta.index_settings. add_index already updates this attribute. If non_tensor_fields is non-empty and contains new fields, add_documents will have to update the index info.
Open Issues
Can we extend treat_urls_and_pointers_as_images to the per-field level as well?
This is easier in the alternative (field-level) approach than in the original.
Required Changes
marqo-ai/marqo
Add parameter, non_tensor_fields to add_documents_orchestrator
Add parameter, non_tensor_fields to add_documents
Add query parameter to and pass through to add_documents_orchestrator:
@app.post("/indexes/{index_name}/documents")
@app.put("/indexes/{index_name}/documents")
In add_documents update line:40, if isinstance(field_content, (str, Image.Image)): to exclude on non_tensor_fields
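The line-40 change described above amounts to adding a membership test against non_tensor_fields. A minimal sketch (hypothetical helper names; the real check also accepts Image.Image, omitted here to keep the sketch self-contained):

```python
def is_vectorisable_type(content):
    # Stand-in for `isinstance(field_content, (str, Image.Image))`;
    # PIL is left out so the sketch has no external dependencies.
    return isinstance(content, str)

def fields_to_vectorise(doc, non_tensor_fields):
    """Names of the fields in `doc` that should get tensors."""
    return [
        field for field, content in doc.items()
        if is_vectorisable_type(content) and field not in non_tensor_fields
    ]
```

Fields excluded here would still be stored and remain lexically searchable; only inference is skipped.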
marqo-ai/py-marqo
Add kwarg non_tensor_fields to add_documents. Default to empty list.
Pass through into requests made in non_tensor_fields
marqo-ai/marqodocs
Update query parameter in ./docs/API-Reference/documents.md for methods:
POST /indexes/{index_name}/documents
PUT /indexes/{index_name}/documents
| gharchive/issue | 2022-11-04T01:28:27 | 2025-04-01T06:44:53.851494 | {
"authors": [
"Jeadie"
],
"repo": "marqo-ai/marqo",
"url": "https://github.com/marqo-ai/marqo/issues/157",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1107066915 | [BUG]run on ray put failed
I tried to run Mars on my Ray cluster. Every node of the Ray cluster runs import mars without problems, but when I run the following example code from the README, I get an error.
code:
import ray
ray.init(address="ray://172.29.0.89:10001")
import mars
mars.new_ray_session(worker_num=5)
error:
Put failed:
.........
Can't get attribute 'RayMainPool.mark_service_ready' on <module 'mars.oscar.backends.ray.pool' from '/root/miniconda3/envs/raycluster/lib/python3.9/site-packages/mars/oscar/backends/ray/pool.py'>
Could you provide your mars version and paste the full error message?
All nodes in the cluster have pymars 0.8.0 installed.
mars.new_ray_session(worker_num=5)
Traceback (most recent call last):
File "python/ray/_raylet.pyx", line 759, in ray._raylet.task_execution_handler
File "python/ray/_raylet.pyx", line 494, in ray._raylet.execute_task
File "/home/panhl/miniconda3/envs/raycluster/lib/python3.9/site-packages/ray/_private/function_manager.py", line 512, in temporary_actor_method
raise RuntimeError(
RuntimeError: The actor with name RaySubPool failed to import on the worker. This may be because needed library dependencies are not installed in the worker environment:
Traceback (most recent call last):
File "/home/panhl/miniconda3/envs/raycluster/lib/python3.9/site-packages/ray/_private/function_manager.py", line 564, in _load_actor_class_from_gcs
actor_class = pickle.loads(pickled_class)
AttributeError: Can't get attribute 'RaySubPool.mark_service_ready' on <module 'mars.oscar.backends.ray.pool' from '/home/panhl/miniconda3/envs/raycluster/lib/python3.9/site-packages/mars/oscar/backends/ray/pool.py'>
An unexpected internal error occurred while the worker was executing a task.
Traceback (most recent call last):
File "python/ray/_raylet.pyx", line 759, in ray._raylet.task_execution_handler
File "python/ray/_raylet.pyx", line 494, in ray._raylet.execute_task
File "/home/panhl/miniconda3/envs/raycluster/lib/python3.9/site-packages/ray/_private/function_manager.py", line 512, in temporary_actor_method
raise RuntimeError(
RuntimeError: The actor with name RaySubPool failed to import on the worker. This may be because needed library dependencies are not installed in the worker environment:
Traceback (most recent call last):
File "/home/panhl/miniconda3/envs/raycluster/lib/python3.9/site-packages/ray/_private/function_manager.py", line 564, in _load_actor_class_from_gcs
actor_class = pickle.loads(pickled_class)
AttributeError: Can't get attribute 'RaySubPool.mark_service_ready' on <module 'mars.oscar.backends.ray.pool' from '/home/panhl/miniconda3/envs/raycluster/lib/python3.9/site-packages/mars/oscar/backends/ray/pool.py'>
An unexpected internal error occurred while the worker was executing a task.
A worker died or was killed while executing a task by an unexpected system error. To troubleshoot the problem, check the logs for the dead worker. RayTask ID: ffffffffffffffff2ea5291bd9dc0d577012de214a000000 Worker ID: 4c55520305008c9303709fcc4bd652d3c6ce9e864be823d6e87cd7ec Node ID: 34598749f0b111985114f32a0edb0262256574319ebf1b92225ac895 Worker IP address: 172.24.14.249 Worker port: 10226 Worker PID: 1755519
A worker died or was killed while executing a task by an unexpected system error. To troubleshoot the problem, check the logs for the dead worker. RayTask ID: ffffffffffffffff4e466a12709c8a18d63316884a000000 Worker ID: df2bce56de6519d5871722cb0a7d6f217aae21563207f38cea00e6ad Node ID: 34598749f0b111985114f32a0edb0262256574319ebf1b92225ac895 Worker IP address: 172.24.14.249 Worker port: 10225 Worker PID: 1755518
(pid=None, ip=172.24.14.249) [... the same "RaySubPool failed to import" / AttributeError: Can't get attribute 'RaySubPool.mark_service_ready' traceback repeats for each remaining worker; only the worker IDs, PIDs, and timestamps differ ...]
(pid=None, ip=172.24.14.249) [2022-01-20 09:05:23,062 E 1755621 1755662] core_worker.cc:298: The global worker has already been shutdown. This happens when the language frontend accesses the Ray's worker after it is shutdown. The process will exit
Traceback (most recent call last):
File "/tmp/ipykernel_1755390/1841428437.py", line 1, in <module>
mars.new_ray_session(worker_num=5)
File "/home/panhl/miniconda3/envs/money/lib/python3.9/site-packages/mars/deploy/oscar/ray.py", line 354, in new_ray_session
client = new_cluster_in_ray(**new_cluster_kwargs)
File "/home/panhl/miniconda3/envs/money/lib/python3.9/site-packages/mars/deploy/oscar/ray.py", line 326, in new_cluster_in_ray
client = fut.result()
File "/home/panhl/miniconda3/envs/money/lib/python3.9/concurrent/futures/_base.py", line 445, in result
return self.__get_result()
File "/home/panhl/miniconda3/envs/money/lib/python3.9/concurrent/futures/_base.py", line 390, in __get_result
raise self._exception
File "/home/panhl/miniconda3/envs/money/lib/python3.9/site-packages/mars/deploy/oscar/ray.py", line 319, in new_cluster
raise ex
File "/home/panhl/miniconda3/envs/money/lib/python3.9/site-packages/mars/deploy/oscar/ray.py", line 311, in new_cluster
await cluster.start()
File "/home/panhl/miniconda3/envs/money/lib/python3.9/site-packages/mars/deploy/oscar/ray.py", line 482, in start
worker_pools = await asyncio.gather(
File "/home/panhl/miniconda3/envs/money/lib/python3.9/site-packages/mars/deploy/oscar/ray.py", line 110, in new_worker
return await self._cluster_state_ref.new_worker(worker_address)
File "/home/panhl/miniconda3/envs/money/lib/python3.9/site-packages/mars/oscar/backends/context.py", line 177, in send
return self._process_result_message(result)
File "/home/panhl/miniconda3/envs/money/lib/python3.9/site-packages/mars/oscar/backends/context.py", line 69, in _process_result_message
raise message.error.with_traceback(message.traceback)
File "/home/sys-admin/miniconda3/envs/raycluster/lib/python3.9/site-packages/mars/oscar/backends/pool.py", line 520, in send
File "/home/sys-admin/miniconda3/envs/raycluster/lib/python3.9/site-packages/mars/oscar/backends/pool.py", line 319, in _run_coro
File "/home/sys-admin/miniconda3/envs/raycluster/lib/python3.9/site-packages/mars/oscar/api.py", line 115, in __on_receive__
File "mars/oscar/core.pyx", line 366, in __on_receive__
raise ex
File "mars/oscar/core.pyx", line 338, in mars.oscar.core._BaseActor.__on_receive__
with debug_async_timeout('actor_lock_timeout',
File "mars/oscar/core.pyx", line 341, in mars.oscar.core._BaseActor.__on_receive__
async with self._lock:
File "mars/oscar/core.pyx", line 344, in mars.oscar.core._BaseActor.__on_receive__
result = await result
File "/home/sys-admin/miniconda3/envs/raycluster/lib/python3.9/site-packages/mars/deploy/oscar/ray.py", line 200, in new_worker
File "/home/sys-admin/miniconda3/envs/raycluster/lib/python3.9/site-packages/mars/deploy/oscar/pool.py", line 81, in create_worker_actor_pool
File "/home/sys-admin/miniconda3/envs/raycluster/lib/python3.9/site-packages/mars/oscar/api.py", line 58, in create_actor_pool
File "/home/sys-admin/miniconda3/envs/raycluster/lib/python3.9/site-packages/mars/oscar/backends/ray/backend.py", line 62, in create_actor_pool
RayTaskError: ray::RayMainPool.start() (pid=2433967, ip=172.29.0.89, repr=<mars.oscar.backends.ray.pool.RayMainPool object at 0x7fe3715c9100>)
File "/root/miniconda3/envs/raycluster/lib/python3.9/concurrent/futures/_base.py", line 438, in result
return self.__get_result()
File "/root/miniconda3/envs/raycluster/lib/python3.9/concurrent/futures/_base.py", line 390, in __get_result
raise self._exception
File "/root/miniconda3/envs/raycluster/lib/python3.9/site-packages/mars/oscar/backends/ray/pool.py", line 260, in start
self._actor_pool = await create_actor_pool(
File "/root/miniconda3/envs/raycluster/lib/python3.9/site-packages/mars/oscar/backends/pool.py", line 1231, in create_actor_pool
pool: MainActorPoolType = await pool_cls.create(
File "/root/miniconda3/envs/raycluster/lib/python3.9/site-packages/mars/oscar/backends/pool.py", line 987, in create
processes = await cls.wait_sub_pools_ready(tasks)
File "/root/miniconda3/envs/raycluster/lib/python3.9/site-packages/mars/oscar/backends/ray/pool.py", line 126, in wait_sub_pools_ready
return [await t for t in create_pool_tasks]
File "/root/miniconda3/envs/raycluster/lib/python3.9/site-packages/mars/oscar/backends/ray/pool.py", line 126, in <listcomp>
return [await t for t in create_pool_tasks]
File "/root/miniconda3/envs/raycluster/lib/python3.9/site-packages/mars/oscar/backends/ray/pool.py", line 121, in start_sub_pool
await actor_handle.start.remote()
ray.exceptions.RayActorError: The actor died unexpectedly before finishing this task.
uninstall pymars from every node
mars.new_ray_session(worker_num=5)
Put failed:
2022-01-20 09:00:35,124 INFO threading.py:234 -- Exception from actor creation is ignored in destructor. To receive this exception in application code, call a method on the actor reference before its destructor is run.
2022-01-20 09:00:35,124 INFO threading.py:234 -- Exception from actor creation is ignored in destructor. To receive this exception in application code, call a method on the actor reference before its destructor is run.
2022-01-20 09:00:35,125 INFO threading.py:234 -- Exception from actor creation is ignored in destructor. To receive this exception in application code, call a method on the actor reference before its destructor is run.
2022-01-20 09:00:35,125 INFO threading.py:234 -- Exception from actor creation is ignored in destructor. To receive this exception in application code, call a method on the actor reference before its destructor is run.
2022-01-20 09:00:35,126 INFO threading.py:234 -- Exception from actor creation is ignored in destructor. To receive this exception in application code, call a method on the actor reference before its destructor is run.
2022-01-20 09:00:35,126 INFO threading.py:234 -- Exception from actor creation is ignored in destructor. To receive this exception in application code, call a method on the actor reference before its destructor is run.
2022-01-20 09:00:35,127 INFO threading.py:234 -- Exception from actor creation is ignored in destructor. To receive this exception in application code, call a method on the actor reference before its destructor is run.
2022-01-20 09:00:35,127 INFO threading.py:234 -- Exception from actor creation is ignored in destructor. To receive this exception in application code, call a method on the actor reference before its destructor is run.
2022-01-20 09:00:35,127 INFO threading.py:234 -- Exception from actor creation is ignored in destructor. To receive this exception in application code, call a method on the actor reference before its destructor is run.
Traceback (most recent call last):
File "/tmp/ipykernel_1754372/1841428437.py", line 1, in <module>
mars.new_ray_session(worker_num=5)
File "/home/panhl/miniconda3/envs/money/lib/python3.9/site-packages/mars/deploy/oscar/ray.py", line 354, in new_ray_session
client = new_cluster_in_ray(**new_cluster_kwargs)
File "/home/panhl/miniconda3/envs/money/lib/python3.9/site-packages/mars/deploy/oscar/ray.py", line 326, in new_cluster_in_ray
client = fut.result()
File "/home/panhl/miniconda3/envs/money/lib/python3.9/concurrent/futures/_base.py", line 445, in result
return self.__get_result()
File "/home/panhl/miniconda3/envs/money/lib/python3.9/concurrent/futures/_base.py", line 390, in __get_result
raise self._exception
File "/home/panhl/miniconda3/envs/money/lib/python3.9/site-packages/mars/deploy/oscar/ray.py", line 319, in new_cluster
raise ex
File "/home/panhl/miniconda3/envs/money/lib/python3.9/site-packages/mars/deploy/oscar/ray.py", line 311, in new_cluster
await cluster.start()
File "/home/panhl/miniconda3/envs/money/lib/python3.9/site-packages/mars/deploy/oscar/ray.py", line 461, in start
self._supervisor_pool = await create_supervisor_actor_pool(
File "/home/panhl/miniconda3/envs/money/lib/python3.9/site-packages/mars/deploy/oscar/pool.py", line 36, in create_supervisor_actor_pool
return await mo.create_actor_pool(
File "/home/panhl/miniconda3/envs/money/lib/python3.9/site-packages/mars/oscar/api.py", line 58, in create_actor_pool
return await get_backend(scheme).create_actor_pool(
File "/home/panhl/miniconda3/envs/money/lib/python3.9/site-packages/mars/oscar/backends/ray/backend.py", line 51, in create_actor_pool
ray.remote(RayMainPool)
File "/home/panhl/miniconda3/envs/money/lib/python3.9/site-packages/ray/actor.py", line 505, in remote
return actor_cls._remote(
File "/home/panhl/miniconda3/envs/money/lib/python3.9/site-packages/ray/util/tracing/tracing_helper.py", line 371, in _invocation_actor_class_remote_span
return method(self, args, kwargs, *_args, **_kwargs)
File "/home/panhl/miniconda3/envs/money/lib/python3.9/site-packages/ray/actor.py", line 618, in _remote
return client_mode_convert_actor(
File "/home/panhl/miniconda3/envs/money/lib/python3.9/site-packages/ray/_private/client_mode_hook.py", line 193, in client_mode_convert_actor
return client_actor._remote(in_args, in_kwargs, **kwargs)
File "/home/panhl/miniconda3/envs/money/lib/python3.9/site-packages/ray/util/client/common.py", line 225, in _remote
return self.options(**option_args).remote(*args, **kwargs)
File "/home/panhl/miniconda3/envs/money/lib/python3.9/site-packages/ray/util/client/common.py", line 401, in remote
futures = ray.call_remote(self, *args, **kwargs)
File "/home/panhl/miniconda3/envs/money/lib/python3.9/site-packages/ray/util/client/api.py", line 106, in call_remote
return self.worker.call_remote(instance, *args, **kwargs)
File "/home/panhl/miniconda3/envs/money/lib/python3.9/site-packages/ray/util/client/worker.py", line 449, in call_remote
task = instance._prepare_client_task()
File "/home/panhl/miniconda3/envs/money/lib/python3.9/site-packages/ray/util/client/common.py", line 386, in _prepare_client_task
task = self._remote_stub._prepare_client_task()
File "/home/panhl/miniconda3/envs/money/lib/python3.9/site-packages/ray/util/client/common.py", line 236, in _prepare_client_task
self._ensure_ref()
File "/home/panhl/miniconda3/envs/money/lib/python3.9/site-packages/ray/util/client/common.py", line 207, in _ensure_ref
self._ref = ray.worker._put_pickled(
File "/home/panhl/miniconda3/envs/money/lib/python3.9/site-packages/ray/util/client/worker.py", line 407, in _put_pickled
raise cloudpickle.loads(resp.error)
ModuleNotFoundError: No module named 'mars'
I noticed that not all nodes had the same mars version; it works well after fixing that.
Thank you, feel free to reach out to us if any issue occurs.
| gharchive/issue | 2022-01-18T15:48:41 | 2025-04-01T06:44:53.863905 | {
"authors": [
"hekaisheng",
"qinxuye",
"zizouqidashen"
],
"repo": "mars-project/mars",
"url": "https://github.com/mars-project/mars/issues/2638",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1117844591 | Gradle build error while removing jcenter
I got a 'Could not find com.arthurivanets.adapster:adapster:1.0.13.' error message after removing jcenter from build.gradle, as suggested by Android Studio.
same issue
Add
jcenter()
maven { url 'https://jitpack.io' }
to your settings.gradle
Hey.
The solution, as of now, is to declare the jcenter() repository in your project.
| gharchive/issue | 2022-01-28T21:02:17 | 2025-04-01T06:44:53.867485 | {
"authors": [
"jja08111",
"mars885",
"tomleejumah",
"yenaingoo992"
],
"repo": "mars885/persistent-search-view",
"url": "https://github.com/mars885/persistent-search-view/issues/34",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1425528940 | flaky TestBidirectionalStreamsDataTransfer/server-initiated test
=== RUN TestBidirectionalStreamsDataTransfer/server-initiated
webtransport_test.go:88:
Error Trace: /Users/runner/work/webtransport-go/webtransport-go/webtransport_test.go:88
/Users/runner/work/webtransport-go/webtransport-go/webtransport_test.go:158
Error: Received unexpected error:
timeout: no recent network activity
Test: TestBidirectionalStreamsDataTransfer/server-initiated
--- FAIL: TestBidirectionalStreamsDataTransfer (5.06s)
--- PASS: TestBidirectionalStreamsDataTransfer/client-initiated (0.01s)
--- FAIL: TestBidirectionalStreamsDataTransfer/server-initiated (5.06s)
Apparently this was fixed by #53.
| gharchive/issue | 2022-10-27T12:07:54 | 2025-04-01T06:44:53.881698 | {
"authors": [
"marten-seemann"
],
"repo": "marten-seemann/webtransport-go",
"url": "https://github.com/marten-seemann/webtransport-go/issues/41",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2735355667 | Add the Cobian Reflector application
Please confirm these before moving forward.
[X] I have searched for my feature proposal and have not found a work-in-progress/duplicate/resolved/discarded issue.
[X] This proposal is a completely new feature. If you want to suggest an improvement or an enhancement, please use this template.
Describe the new feature
First of all, I apologize for having published this request in the wrong section but the system doesn't let me publish the request in Pull requests, I don't know why.
The Cobian Reflector application is missing from the UnigetUI repositories, so it can't be downloaded or updated. I ask the developer if he can add this fantastic program to the UnigetUI repositories.
Describe how this new feature could help users
Cobian Reflector is the best application for creating backups, it is very useful.
The Cobian Reflector application is missing from the UnigetUI repositories, so it can't be downloaded or updated. I ask the developer if he can add this fantastic program to the UnigetUI repositories.
UniGetUI does not have its own repositories. UniGetUI relies on package managers to download and update your software. You may contact the author of Cobian Reflector and ask them to publish their software to any of the compatible package managers, such as WinGet.
First of all, I apologize for having published this request in the wrong section but the system doesn't let me publish the request in Pull requests, I don't know why.
A pull request is a code change you propose to merge into the repository. It has nothing to do with requesting a feature.
| gharchive/issue | 2024-12-12T09:28:20 | 2025-04-01T06:44:53.885585 | {
"authors": [
"Damians0",
"marticliment"
],
"repo": "marticliment/UniGetUI",
"url": "https://github.com/marticliment/UniGetUI/issues/3064",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1660719306 | [BUG/ISSUE] Sharing isn't working for Portuguese
Try changing the language to Portuguese (BR and PT) and then click the share icon. The share menu won't open for me. Changing the language to English seems to fix the problem. I have tried French and it also works.
I translated the string... It should be fixed
Yes, on next lang update this will get fixed.
Those strings can never be changed, otherwise Python fails to parse the string and therefore crashes.
In this case it is a minor crash, but it can get worse if it happens in the main script.
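For intuition, the failure mode described above can be reproduced in a few lines (the strings and placeholder name below are hypothetical; only the mechanism is taken from the thread): if a translation breaks a format placeholder, `str.format()` raises at runtime, which can crash the code that renders the string.

```python
# Hypothetical illustration: a translated string with a broken placeholder
# makes str.format() raise ValueError when the string is rendered.
def render(template, **kwargs):
    return template.format(**kwargs)

print(render("Share {item}", item="file.txt"))     # Share file.txt

try:
    render("Compartilhar {item", item="file.txt")  # broken translation: missing '}'
except ValueError as exc:
    print("crash:", exc)
```

This is why placeholders inside translatable strings must be kept intact verbatim.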
| gharchive/issue | 2023-04-10T12:27:54 | 2025-04-01T06:44:53.887239 | {
"authors": [
"marticliment",
"ppvnf"
],
"repo": "marticliment/WingetUI",
"url": "https://github.com/marticliment/WingetUI/issues/897",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
184826341 | question about to use this code with Windows Forms
Hi,
It is not an issue, just a question.
You did a huge amount of work converting several useful apps from pure Android to Xamarin Android.
Is it possible (and how) to use this code (and your other ports) with Xamarin Forms and XAML?
Thanks
You can always create a native renderer, but in general this is a StackOverflow question and doesn't belong here.
Sorry to post my question here, but I also wanted to know whether you have these projects for Windows.Forms and whether I had missed anything. Now I know that you have not.
| gharchive/issue | 2016-10-24T12:26:55 | 2025-04-01T06:44:53.889084 | {
"authors": [
"dg782",
"martijn00"
],
"repo": "martijn00/MaterialDesignHelpers",
"url": "https://github.com/martijn00/MaterialDesignHelpers/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
177107086 | Error while changing the number of classes
Hi,
I am trying to train on a different number of classes than mentioned in the example here. I have two classes, so it becomes 2+1(background)=3 classes. I changed the value of NUM_LABEL in run_pascal_strong.sh but it gives error saying "unexpected label 3".
What could be the reason behind it?
PS: I am following all the pre-processing steps as mentioned and they work fine.
Thanks.
Hello,
I am also facing the same issue. I have 2 classes including background, and it keeps giving me the error "unexpected label 147". The label value keeps changing. I have checked everything I could and don't see any issue in my prototxt file, but I don't know how to solve this issue.
Any hints?
Hello @aurooj and @nattari,
I had the same problem. Make sure that your label images only contain values below NUM_LABEL. That is, if you are using 3 classes (counting background), your label image should have values from 0 (for background) to 2.
Hope this helps,
Cheers.
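The check Alberto suggests can be sketched in a few lines of plain Python (loading actual annotation PNGs, e.g. with PIL, is omitted; the pixel values and NUM_LABEL below are illustrative, and 255 is assumed to be the usual PASCAL-style "ignore" value):

```python
# Minimal sketch of the label sanity check: with NUM_LABEL classes
# (background included), valid ground-truth values are 0 .. NUM_LABEL - 1.
IGNORE = 255  # conventional "ignore" label in PASCAL-style annotations

def invalid_labels(label_values, num_label):
    """Return the label values that would trigger an 'Unexpected label' error."""
    return {v for v in label_values if v != IGNORE and not (0 <= v < num_label)}

# Example: 3 classes (background + 2 foreground) -> only 0, 1, 2 are valid.
pixels = [0, 0, 1, 2, 255, 38]
print(invalid_labels(pixels, 3))  # {38}
```

Running this over every annotation image before training catches out-of-range labels early, instead of deep inside Caffe's loss layer.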
Hi Alberto,
Thanks for your reply. Yes, I checked my dataset; all it has is 0s and 1s when num_output = 2. I tried printing gt_label in interp_layer.cpp, and it shows that it takes 1 as input, but after this layer my labels change to 38, which is weird; then seg_accuracy_layer.cpp throws an "unexpected label" error.
@albertoQD thanks for your reply. I don't think that could be the problem, because in DeepLab we can choose a subset of classes, and they may not be contiguous or start from 0. I believe it's something else.
Hi @nattari @aurooj @albertoQD, I have the same problem with "Unexpected label ***", and the number after "Unexpected label" is different every time.
My configuration is as follows:
The DeepLab model : DeepLab_LargeFOV.
Dataset: original PASCAL VOC12.
Init model is vgg16_20M.
Solver: exper/voc12/config/DeepLab_LargeFOV/solver_train.prototxt
train_net: exper/voc12/config/DeepLab_LargeFOV/train_train.prototxt
source: exper/voc12/list/train.txt
When I run run_pascal_strong.sh, an error always occurs and I can't proceed further. The error is shown below.
…...
I0612 15:44:39.689864 12910 net.cpp:208] This network produces output accuracy
I0612 15:44:39.689891 12910 net.cpp:467] Collecting Learning Rate and Weight Decay.
I0612 15:44:39.689899 12910 net.cpp:219] Network initialization done.
I0612 15:44:39.689903 12910 net.cpp:220] Memory required for data: 3663953152
I0612 15:44:39.689975 12910 solver.cpp:41] Solver scaffolding done.
I0612 15:44:39.689981 12910 solver.cpp:160] Solving DeepLab-LargeFOV
I0612 15:44:39.689985 12910 solver.cpp:161] Learning Rate Policy: step
F0612 15:44:41.198884 12910 softmax_loss_layer.cpp:86] Unexpected label 19
*** Check failure stack trace: ***
@ 0x7efefd561daa (unknown)
@ 0x7efefd561ce4 (unknown)
@ 0x7efefd5616e6 (unknown)
@ 0x7efefd564687 (unknown)
@ 0x5482a8 caffe::SoftmaxWithLossLayer<>::Forward_cpu()
@ 0x47413a caffe::Net<>::ForwardFromTo()
@ 0x4743cf caffe::Net<>::ForwardPrefilled()
@ 0x57a320 caffe::Solver<>::Solve()
@ 0x419be8 train()
@ 0x412278 main
@ 0x7efefa399f45 (unknown)
@ 0x417927 (unknown)
@ (nil) (unknown)
Aborted
I have answered this question here:
https://stackoverflow.com/questions/40370852/unexpected-labal-38-deeplab-error/44530258#44530258
@aurooj Thank you for answering my question.
The differences between our cases are as follows: (1) I didn't change the number of classes and used the original PASCAL VOC2012; (2) the error comes from softmax_loss_layer.cpp, not seg_accuracy_layer.cpp; (3) when I run run_pascal_strong.sh, the value of the unexpected label changes every time.
So, what should I change?
Any help will be highly appreciated.
Thanks!
@zhangrui0828 Sorry for being late in replying. So, are you sure your data has labels all from 0 to 20?
@zhangrui0828 Have you transformed the annotations by removing the colormap?
| gharchive/issue | 2016-09-15T07:21:58 | 2025-04-01T06:44:53.964276 | {
"authors": [
"Twsit",
"albertoQD",
"aurooj",
"nattari",
"zhangrui0828"
],
"repo": "martinkersner/train-DeepLab",
"url": "https://github.com/martinkersner/train-DeepLab/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
325575029 | Error in Chrome - Refused to set unsafe header "Accept-Encoding"
In the Chrome browser I'm getting a Refused to set unsafe header "Accept-Encoding" error message every time a request is processed. According to this StackOverflow question, the Accept-Encoding header should not be set by the client. But I'm not sure what exactly it does and what it can affect. Anyway, if I remove the Accept-Encoding header in the constructor of the PubgAPI class in pubg-typescript-api\api\base.js, it works fine and the error is not produced:
/*if (useGzip) {
headers['Accept-Encoding'] = 'gzip';
}*/
Btw. great job, I like this wrapper.
Hi, thanks for reporting.
I added the header to make the server respond with gzipped content, as explained here.
However, it appears that this is a forbidden header name for browsers, but it is fine when sending requests from other sources (like node).
Just to confirm, do you get a gzipped response (does the server reply with Content-Encoding: gzip) after commenting out the Accept-Encoding header?
I will patch it by defaulting useGzip to false, so that it can be set to true when initializing PubgAPI when using node and it stays false if sending requests through a browser.
Hello,
I used the wrapper in a fully client-side application, so it was used in quite an unusual way. The expected usage here is server to server, where Accept-Encoding would work smoothly. So my error report was probably misleading; sorry for that.
The new version from #7 (1.3.1) works perfectly without errors even on the client side. It looks like the response is using gzip. Here are the headers for the new version:
General
Request URL: https://api.playbattlegrounds.com/shards/pc-eu/matches/41dd024d-9bc5-4b76-b0d8-6c0330d675d3
Request Method: GET
Status Code: 200 (from disk cache)
Remote Address: 54.230.129.45:443
Referrer Policy: no-referrer-when-downgrade
Response headers
access-control-allow-origin: *
access-control-expose-headers: Content-Length
age: 42254
cache-control: max-age=604800
content-encoding: gzip
content-type: application/json
date: Tue, 22 May 2018 19:01:18 GMT
status: 200
vary: Accept-Encoding
via: 1.1 e50082f108f86da8af6ed222cfcad2b5.cloudfront.net (CloudFront)
x-amz-cf-id: CYsmtNcLlE_sHx48lV8jmWPYZwN1n6qWdfeafX_ZcYm4BETcs6nMyA==
x-cache: Hit from cloudfront
x-request-id: 604b0c16-01e3-4638-9357-770e93891270
Request headers
Provisional headers are shown
Accept: application/json
Authorization: Bearer eyJ...KmWKWaM
Origin: http://localhost:4200
Referer: http://localhost:4200/matches
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36
Thank you.
| gharchive/issue | 2018-05-23T07:08:15 | 2025-04-01T06:44:53.979251 | {
"authors": [
"Cekous",
"martinsileno"
],
"repo": "martinsileno/pubg-typescript-api",
"url": "https://github.com/martinsileno/pubg-typescript-api/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1337313261 | [WIP] [Blocked] Secure Sort implementation
[WIP]
Implementing Radix Sort using the approach described in https://eprint.iacr.org/2019/695.pdf
Before calling mpc_sort function, caller is expected to transform the bits into modulus conversion and send secret shared inputs to the function.
The entire flow of the implementation is represented in this diagram.
This diff implements finding order of each input in the sorted list.
For now, this diff is blocked on executing multiple secure multiplications in parallel. I am parking this PR for later use once those changes are made. The test in this PR fails because parallelization is not yet handled.
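For intuition, the "order of each input in the sorted list" step of a bit-wise radix sort can be sketched in the clear (no secret sharing; names and structure below are illustrative, not the PR's actual code):

```python
# In-the-clear sketch of one radix-sort round: given one bit per input,
# compute each input's destination index in the stably sorted list
# (0-bits first, then 1-bits). The MPC protocol performs the same
# counting arithmetic on secret-shared values.
def sorted_order(bits):
    zeros = bits.count(0)
    pos0, pos1 = 0, zeros   # next free slot for a 0-bit / 1-bit element
    order = []
    for b in bits:
        if b == 0:
            order.append(pos0)
            pos0 += 1
        else:
            order.append(pos1)
            pos1 += 1
    return order

print(sorted_order([1, 0, 1, 0]))  # [2, 0, 3, 1]
```

Because the placement is stable, repeating this round from the least to the most significant bit yields the full radix sort.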
This is incorporated in #90 . Closing this one
| gharchive/pull-request | 2022-08-12T14:55:58 | 2025-04-01T06:44:53.982162 | {
"authors": [
"richajaindce"
],
"repo": "martinthomson/raw-ipa",
"url": "https://github.com/martinthomson/raw-ipa/pull/76",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
189446943 | Patch replace ota version with checksum
Implements #220
Note that, if OTA fails, we don't need to restart anymore just for resetting the expected MD5 because now there WILL be another SetMD5() when the server retries the update.
Updated bash script:
#!/bin/bash
DEVICE=homie/<device>
BROKER="-h <broker>"
FW_BIN=/path/to/firmware.bin
FW_MD5=$(md5sum -b $FW_BIN | cut -d" " -f1)
FW_B64=${FW_BIN/.bin/.b64}
base64 -w0 $FW_BIN > $FW_B64
mosquitto_pub $BROKER -t "$DEVICE/\$implementation/ota/checksum" -m $FW_MD5
mosquitto_pub $BROKER -t "$DEVICE/\$implementation/ota/firmware" -f $FW_B64
I added a second commit to be slightly more consistent with icons and OTA message grammar.
😅 Hooray! Lint passed straight away.
Good job! And congrats for the lint, you're becoming a master 😄
| gharchive/pull-request | 2016-11-15T16:54:27 | 2025-04-01T06:44:54.002959 | {
"authors": [
"marvinroger",
"mrpace2"
],
"repo": "marvinroger/homie-esp8266",
"url": "https://github.com/marvinroger/homie-esp8266/pull/221",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
277613173 | unknown type Concat
caffe2darknet.py cannot parse concat layer
@marvis
I have the same problem
| gharchive/issue | 2017-11-29T02:59:39 | 2025-04-01T06:44:54.004234 | {
"authors": [
"azuryl",
"lincolnhard",
"vinjn"
],
"repo": "marvis/pytorch-caffe-darknet-convert",
"url": "https://github.com/marvis/pytorch-caffe-darknet-convert/issues/21",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2322370884 | 🛑 POH events bot from render hosting is down
In ec379b2, POH events bot from render hosting (https://poh-events-bot.onrender.com/ping) was down:
HTTP code: 0
Response time: 0 ms
Resolved: POH events bot from render hosting is back up in c1074ac after 15 minutes.
| gharchive/issue | 2024-05-29T05:12:50 | 2025-04-01T06:44:54.017353 | {
"authors": [
"masch"
],
"repo": "masch/poh-events-uptime",
"url": "https://github.com/masch/poh-events-uptime/issues/2870",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1604965697 | New address format
Context
Update the wallet plugin with new wallet format
User flow
Describe the user flow using user stories so the end result is super clear - PM
How to
List the step-by-step to get it do if needed - PM
Technical details
Give the technical insights so anyone in the team can tackle the tasks - Dev
QA testing
Does this task require some QA tests ?
If yes, explain how to validate it
@Thykof could you help me to test this on the wallet plugin side please?
I would like to install the plugin using the front-end (aka installing the plugin manually rather than cloning the repo) if that makes sense...
Cheers :)
| gharchive/issue | 2023-03-01T13:20:03 | 2025-04-01T06:44:54.024963 | {
"authors": [
"SebastienValla",
"SlnPons"
],
"repo": "massalabs/thyra-plugin-wallet",
"url": "https://github.com/massalabs/thyra-plugin-wallet/issues/38",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1743782627 | wails: update backup screen
We are updating backup screen.
Codecov Report
Patch coverage: 100.00% and no project coverage change.
Comparison is base (4308764) 41.54% compared to head (cdd9374) 41.54%.
:exclamation: Current head cdd9374 differs from pull request most recent head ee9635b. Consider uploading reports for the commit ee9635b to get more accurate results
:exclamation: Your organization is not using the GitHub App Integration. As a result you may experience degraded service beginning May 15th. Please install the Github App Integration for your organization. Read more.
Additional details and impacted files
@@ Coverage Diff @@
## main #340 +/- ##
=======================================
Coverage 41.54% 41.54%
=======================================
Files 15 15
Lines 1047 1047
=======================================
Hits 435 435
Misses 569 569
Partials 43 43
| Impacted Files | Coverage Δ |
| --- | --- |
| internal/handler/wallet/backup.go | 49.43% <100.00%> (ø) |
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Do you have feedback about the report comment? Let us know in this issue.
| gharchive/pull-request | 2023-06-06T12:36:31 | 2025-04-01T06:44:54.031816 | {
"authors": [
"codecov-commenter",
"maxime-carabina"
],
"repo": "massalabs/thyra-plugin-wallet",
"url": "https://github.com/massalabs/thyra-plugin-wallet/pull/340",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1244387827 | 🛑 RESTREAM-IR-12-ZirSakht-3 is down
In 4415282, RESTREAM-IR-12-ZirSakht-3 ($IR_12) was down:
HTTP code: 0
Response time: 0 ms
Resolved: RESTREAM-IR-12-ZirSakht-3 is back up in a2d342c.
| gharchive/issue | 2022-05-22T22:44:02 | 2025-04-01T06:44:54.045476 | {
"authors": [
"masterking32"
],
"repo": "masterking32/masterstream_uptime",
"url": "https://github.com/masterking32/masterstream_uptime/issues/409",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1371816886 | 🛑 Iran 12 - ParsOnline 5 is down
In efc1cf3, Iran 12 - ParsOnline 5 ($IR_12) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Iran 12 - ParsOnline 5 is back up in fba97ff.
| gharchive/issue | 2022-09-13T17:58:46 | 2025-04-01T06:44:54.047864 | {
"authors": [
"masterking32"
],
"repo": "masterking32/masterstream_uptime",
"url": "https://github.com/masterking32/masterstream_uptime/issues/8362",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1438659772 | 🛑 Iran 10 - Zirsakht 3 is down
In eeee9a9, Iran 10 - Zirsakht 3 ($IR_10) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Iran 10 - Zirsakht 3 is back up in 93141a5.
| gharchive/issue | 2022-11-07T16:43:57 | 2025-04-01T06:44:54.050055 | {
"authors": [
"masterking32"
],
"repo": "masterking32/masterstream_uptime",
"url": "https://github.com/masterking32/masterstream_uptime/issues/9425",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1441321465 | 🛑 Iran 10 - Zirsakht 3 is down
In 7722492, Iran 10 - Zirsakht 3 ($IR_10) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Iran 10 - Zirsakht 3 is back up in 00018ac.
| gharchive/issue | 2022-11-09T03:49:48 | 2025-04-01T06:44:54.052256 | {
"authors": [
"masterking32"
],
"repo": "masterking32/masterstream_uptime",
"url": "https://github.com/masterking32/masterstream_uptime/issues/9463",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
424878775 | 99.9% of my packages could not be found, but they exist in CDN
Is this suppose to be like this?
The only package that was replaced by the CDN was react-router-dom.
❌ 'lodash' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
but
https://unpkg.com/lodash@4.17.11/ exists
'redux-form' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'dayjs' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ '@babel/runtime/helpers/inherits' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ '@babel/runtime/helpers/getPrototypeOf' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ '@babel/runtime/helpers/possibleConstructorReturn' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ '@babel/runtime/helpers/createClass' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'eventemitter3' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'prop-types' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'scriptjs' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ '@mapbox/point-geometry' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'babel-runtime/core-js/object/get-prototype-of' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'babel-runtime/helpers/classCallCheck' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'babel-runtime/helpers/createClass' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'babel-runtime/helpers/possibleConstructorReturn' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'babel-runtime/helpers/inherits' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'react-lifecycles-compat' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'prop-types' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'babel-runtime/helpers/extends' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'babel-runtime/core-js/object/get-prototype-of' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'babel-runtime/helpers/classCallCheck' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'babel-runtime/helpers/createClass' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'babel-runtime/helpers/possibleConstructorReturn' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'babel-runtime/helpers/inherits' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'prop-types' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'babel-runtime/core-js/object/keys' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'babel-runtime/helpers/classCallCheck' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'babel-runtime/helpers/createClass' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'babel-runtime/helpers/possibleConstructorReturn' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'babel-runtime/core-js/object/get-prototype-of' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'babel-runtime/helpers/inherits' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'prop-types' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'babel-runtime/helpers/objectWithoutProperties' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'babel-runtime/helpers/extends' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'babel-runtime/core-js/object/get-prototype-of' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'babel-runtime/helpers/createClass' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'babel-runtime/helpers/possibleConstructorReturn' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'babel-runtime/helpers/inherits' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'react-lifecycles-compat' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'prop-types' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'babel-runtime/helpers/classCallCheck' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'babel-runtime/helpers/extends' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'babel-runtime/core-js/object/get-own-property-descriptor' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
❌ 'babel-runtime/helpers/classCallCheck' couldn't be found, please add it to https://github.com/mastilver/module-to-cdn/blob/master/modules.json
....
Hey @rendomnet,
Thanks for filing the issue. This plugin uses mastilver/module-to-cdn to find out if the CDN version of a module exists.
In this case, these libraries should exist in this json file.
If you're interested, you can create a PR to add these to that file.
Hope that helps,
@aulisius.
@rendomnet I had the same issue. For everyone who came here from a web search, here is my recipe, which works like a charm for me: I added a custom unpkg-based resolver to webpack.config.js, see below.
plugins: [
new DynamicCdnWebpackPlugin({
verbose: true,
resolver: (packageName, packageVersion, options) => {
return {
name: packageName.replace('@', ''),
var: packageName.replace('@', ''),
version: packageVersion,
url: `https://unpkg.com/${packageName}@${packageVersion}`,
};
},
}),
],
And that's the output (well, piece of):
> "node_modules/.bin/webpack" --mode=production
✔️ 'accounting' will be served by https://unpkg.com/accounting@0.4.1
✔️ 'jquery' will be served by https://unpkg.com/jquery@3.4.1
✔️ 'jquery-ui/ui/widgets/progressbar' will be served by https://unpkg.com/jquery-ui/ui/widgets/progressbar@1.12.1
✔️ 'shopify/app-bridge-utils' will be served by https://unpkg.com/@shopify/app-bridge-utils@1.28.0
✔️ 'url-search-params-polyfill' will be served by https://unpkg.com/url-search-params-polyfill@8.1.0
✔️ 'shopify/app-bridge' will be served by https://unpkg.com/@shopify/app-bridge@1.28.0
✔️ 'sentry/browser' will be served by https://unpkg.com/@sentry/browser@5.22.3
✔️ 'sentry/integrations' will be served by https://unpkg.com/@sentry/integrations@5.11.1
...
And here's the content of manifest.json:
"accounting.1": "https://unpkg.com/accounting@0.4.1",
"jquery.1": "https://unpkg.com/jquery@3.4.1",
"jquery-ui/ui/widgets/progressbar.1": "https://unpkg.com/jquery-ui/ui/widgets/progressbar@1.12.1",
"@shopify/app-bridge-utils.0": "https://unpkg.com/@shopify/app-bridge-utils@1.28.0",
"url-search-params-polyfill.0": "https://unpkg.com/url-search-params-polyfill@8.1.0",
"@shopify/app-bridge.0": "https://unpkg.com/@shopify/app-bridge@1.28.0",
"@sentry/browser.3": "https://unpkg.com/@sentry/browser@5.22.3",
"@sentry/integrations.1": "https://unpkg.com/@sentry/integrations@5.11.1"
Don't know why there are numeric suffixes, but it seems to work, so I don't mind.
Hope it helps.
| gharchive/issue | 2019-03-25T12:36:19 | 2025-04-01T06:44:54.078565 | {
"authors": [
"PatchRanger",
"aulisius",
"rendomnet"
],
"repo": "mastilver/dynamic-cdn-webpack-plugin",
"url": "https://github.com/mastilver/dynamic-cdn-webpack-plugin/issues/56",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
855079933 | Add BdvViewCreatedListener
This PR adds the following functionalities to org.mastodon.mamut.WindowManager.
Classes that implement BdvViewCreatedListener get a notification when a new MamutViewBdv instance is created.
getBdvWindows() exposes currently open BigDataViewer windows.
The intention (notify about new MamutViewBdv) is fine.
Please use Listeners.List<BdvViewCreatedListener> for the member storing the listeners list and Listeners<BdvViewCreatedListener> for returning a Listeners handle where listeners can be un/registered (There should be one bdvViewCreatedListeners() method instead of methods for adding/removing).
The field should not simply be named listeners. I can already see the PR for adding TrackSchemeViewCreatedListener, and then that would be not specific enough.
See for example FeatureTable for the pattern to use
https://github.com/mastodon-sc/mastodon/blob/master/src/main/java/org/mastodon/feature/ui/FeatureTable.java#L141
https://github.com/mastodon-sc/mastodon/blob/master/src/main/java/org/mastodon/feature/ui/FeatureTable.java#L186
https://github.com/mastodon-sc/mastodon/blob/master/src/main/java/org/mastodon/feature/ui/FeatureTable.java#L299-L302
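The `Listeners.List` pattern described here, one specifically named field per event type plus a single accessor returning a handle for registering and unregistering, is language-agnostic. A minimal Python sketch of the same shape (names are illustrative, not Mastodon's Java API):

```python
class Listeners:
    """Holds listeners of one event type; handed out to callers as the
    single place for adding/removing (mirrors Listeners.List in Java)."""

    def __init__(self):
        self._listeners = []

    def add(self, listener):
        self._listeners.append(listener)

    def remove(self, listener):
        self._listeners.remove(listener)

    def notify(self, *args):
        # Iterate over a copy so listeners may unregister during dispatch.
        for listener in list(self._listeners):
            listener(*args)


class WindowManager:
    def __init__(self):
        # One named field per event type, so a future
        # trackscheme_view_created_listeners() can coexist.
        self._bdv_view_created_listeners = Listeners()

    def bdv_view_created_listeners(self):
        # Single accessor instead of separate add/remove methods.
        return self._bdv_view_created_listeners

    def _create_bdv_view(self, view):
        self._bdv_view_created_listeners.notify(view)
```

A plugin registers via `wm.bdv_view_created_listeners().add(handler)` and, as the thread concludes, removes the handler itself when its window closes rather than having the manager clear the whole list in `setAppModel()`.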
👍
Could you reopen this PR? (or should I create a new PR)
I noticed that bdvCreatedListeners should be reset in setAppModel() instead of the constructor. With this change, bdvCreatedListeners will no longer accumulate every time you load/create a project.
https://github.com/elephant-track/mastodon/commit/d2348e7d4f83a4764f03c4b44abf073f4d0c701d
It seems wrong to clear registered listeners in setAppModel().
Can you explain your problem in more detail? Probably there is a better way to solve it.
Thanks. I figured out that I needed to remove the listener on close of a BDV window.
The following change in my plugin solved the problem. bdvCreatedListeners does not need to be reset in setAppModel().
https://github.com/elephant-track/elephant-client/commit/bc494df6e3237af33fe7a703fd58e20ff16b5379
The problem was that the previous listener would remain after loading/creating a project, but the above change solved that problem.
The above change was also wrong as you pointed out here https://github.com/elephant-track/elephant-client/commit/bc494df6e3237af33fe7a703fd58e20ff16b5379#r52880240
I hope I get the point, this time.
https://github.com/elephant-track/elephant-client/commit/ca6944bdb3c41a9b0d9fb271e6c271d7d274f904
Thank you for your kind instructions.
| gharchive/pull-request | 2021-04-10T14:30:09 | 2025-04-01T06:44:54.087512 | {
"authors": [
"ksugar",
"tpietzsch"
],
"repo": "mastodon-sc/mastodon",
"url": "https://github.com/mastodon-sc/mastodon/pull/129",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2399266474 | idk what to fill in the builder
I don't know what it wants me to fill out.
What is a registry, and what name do I need?
What folder? The one where I saved the download? A new one?
And the name?
Then why are you even here building it
Maybe try to read the fucking wiki before opening an issue?
| gharchive/issue | 2024-07-09T21:40:49 | 2025-04-01T06:44:54.144531 | {
"authors": [
"Ghostlogss",
"iCronic",
"toxictager"
],
"repo": "mategol/PySilon-malware",
"url": "https://github.com/mategol/PySilon-malware/issues/529",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2641638717 | How to work with another /basepath?
I'm serving my app from a base path of /myapp, so I'm looking to set up routes and matchers that are always relative to that. I can't quite get the hang of the setup.
<script>
const routes = [
{
path: `/`,
component: Home,
},
{
path: `new`,
component: New,
}
];
function debug(route) {
console.log(route);
return route;
}
</script>
<a use:route href={`/`}>Home</a>
<a use:route href={`/new`}>New</a>
<Router base="/" {routes} pre={debug} />
If I do the above, all good. But my base in vite.config.js is set to "/myapp".
How would I amend the above to accommodate the base path and also match the routes? I've tried all manner of combinations of putting the myapp prefix in the paths in the routes array, and the base prop on the `<Router>`. Is there any info on doing this that I've missed?
Many thanks!
I'm also just using Vite and I'm not sure why the content does not change when I click a URL. @wa-rren-dev did you manage to find a fix?
My issue was that I was doing:
const routes: Route[] = [
{
path: '/swings',
component: SwingStrategies
}
]
Instead I should have done:
const routes: Route[] = [
{
path: 'swings',
component: SwingStrategies
}
]
I'm also just using Vite and I'm not sure why the content does not change when I click a URL. @wa-rren-dev did you manage to find a fix?
I didn't find a fix! Ended up stepping down to Svelte 4 for a few reasons.
I got ya covered @wa-rren-dev! See #9.
Thanks for the feedback and the support <3
I'm also just using Vite and I'm not sure why the content does not change when I click a URL. @wa-rren-dev did you manage to find a fix?
I didn't find a fix! Ended up stepping down to Svelte 4 for a few reasons.
Please come back to the light of svelte 5!!!
@mateothegreat Amazing thank you! Was thinking of porting to typescript so I'll give this a go when I do
@mateothegreat Amazing thank you! Was thinking of porting to typescript so I'll give this a go when I do
No problem man, you're welcome! Ping me and let me know how it goes!!
| gharchive/issue | 2024-11-07T16:57:11 | 2025-04-01T06:44:54.152762 | {
"authors": [
"mateothegreat",
"popconst",
"wa-rren-dev"
],
"repo": "mateothegreat/svelte5-router",
"url": "https://github.com/mateothegreat/svelte5-router/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
611282501 | getSelectedValues only return previous clicked values
Expected Behavior
https://materializecss.com/select.html
Say we have 3 options as described in the above link. Here are my actions and the returned values:
| # Click | Option clicked | Expected value | return from `instance.getSelectedValues()` | return from `e.target.value` |
| --- | --- | --- | --- | --- |
| 1 | Option 1 | Option 1 | (empty) | Option 1 |
| 2 | Option 2 | Option 2 | Option 1 | Option 2 |
| 3 | Option 3 | Option 3 | Option 2 | Option 3 |
| 4 | Option 1 | Option 1 | Option 3 | Option 1 |
When `instance.getSelectedValues()` is called, it always returns the previously clicked values.
It should return the currently clicked value.
Current Behavior
See the return from column 4
Steps to Reproduce (for bugs)
Add the below code to the select onChange
console.log(instance.getSelectedValues());
console.log(e.target.value);
Context
Your Environment
Version used: v1.0.0
Browser Name and version: Chrome
Operating System and version (desktop or mobile): Mac desktop
Original by @weasteam
if multiple is provided, getSelectedValues () works correctly.
Original by @weasteam
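One plausible mechanism for a "one click behind" bug like this, purely as an illustration and not Materialize's actual implementation: the change handler fires before the widget's internal selection state is updated, so reading the state inside the handler always sees the previous click.

```python
class BuggySelect:
    def __init__(self):
        self.selected = []          # state read by get_selected_values()
        self.on_change = None

    def get_selected_values(self):
        return list(self.selected)

    def click(self, value):
        # Bug: listeners are notified *before* the state update, so
        # get_selected_values() inside the handler is one click behind.
        if self.on_change:
            self.on_change(value)
        self.selected = [value]


class FixedSelect(BuggySelect):
    def click(self, value):
        self.selected = [value]     # update state first...
        if self.on_change:
            self.on_change(value)   # ...then notify listeners
```

This also matches the table above: `e.target.value` (carried in the event itself) is correct, while the value read back from the instance lags by one interaction.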
| gharchive/issue | 2020-05-02T21:57:56 | 2025-04-01T06:44:54.180355 | {
"authors": [
"DanielRuf"
],
"repo": "materializecss/materialize",
"url": "https://github.com/materializecss/materialize/issues/138",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1066908286 | Deployment not working
It seems the standard deployment is not working any more (see this job).
Are there any updates on this or resources I can read to figure out how to update this @elsapassaro?
Sorry I don't know, pinging @dou-du who is managing the dokku servers and might have more info
It seems it's still the case (not working); see the latest CD job.
It might be that the host URL has changed?
This has been resolved from the side of connecting via SINTEF, i.e., this issue now represents updating the CD workflow from GitHub to work via the SINTEF network.
I think this one is resolved.
| gharchive/issue | 2021-11-30T07:57:05 | 2025-04-01T06:44:54.183088 | {
"authors": [
"CasperWA",
"elsapassaro",
"unkcpz"
],
"repo": "materialscloud-org/tools-optimade-client",
"url": "https://github.com/materialscloud-org/tools-optimade-client/issues/87",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
562034377 | Install TagBot as a GitHub Action
You're receiving this pull request because the now-deprecated Julia TagBot GitHub App is installed for this repository. This pull request installs TagBot as a GitHub Action.
If this PR does not belong in this repository for any reason, feel free to close it.
See this Discourse thread for more details and to ask questions.
Please do not comment on this pull request, because it is unlikely that I (@christopher-dG) will see it.
Codecov Report
Merging #9 into master will increase coverage by 0.86%.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #9 +/- ##
==========================================
+ Coverage 71.05% 71.91% +0.86%
==========================================
Files 9 9
Lines 463 495 +32
==========================================
+ Hits 329 356 +27
- Misses 134 139 +5
| Impacted Files | Coverage Δ |
| --- | --- |
| src/ssubarray.jl | 46% <0%> (-1.59%) :arrow_down: |
| src/broadcast.jl | 77.19% <0%> (-1.24%) :arrow_down: |
| src/HybridArrays.jl | 98% <0%> (+0.08%) :arrow_up: |
| src/indexing.jl | 84.35% <0%> (+0.38%) :arrow_up: |
| src/linalg.jl | 79.41% <0%> (+1.28%) :arrow_up: |
| src/convert.jl | 62.5% <0%> (+4.43%) :arrow_up: |
| src/abstractarray.jl | 90% <0%> (+15%) :arrow_up: |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 00f98ea...4ab0429. Read the comment docs.
| gharchive/pull-request | 2020-02-08T15:28:33 | 2025-04-01T06:44:54.204111 | {
"authors": [
"JuliaTagBot",
"codecov-io"
],
"repo": "mateuszbaran/HybridArrays.jl",
"url": "https://github.com/mateuszbaran/HybridArrays.jl/pull/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1561395633 | new UI idea
@Hafiz408 can we try this one ?
https://www.youtube.com/watch?v=T33NN_pPeNI
Looks great !! Lets work on this in the future
@Hafiz408 can we try this one ? https://www.youtube.com/watch?v=T33NN_pPeNI
I need to contact you brother
| gharchive/issue | 2023-01-29T18:02:20 | 2025-04-01T06:44:54.231182 | {
"authors": [
"Hafiz408",
"cpushalman",
"mathanamathav"
],
"repo": "mathanamathav/bunker-api",
"url": "https://github.com/mathanamathav/bunker-api/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
519934190 | multiple definition of `__vector_19'
Very often, when I try to compile the library, I get this kind of error.
HardwareSerial0.cpp.o (symbol from plugin): In function `Serial':
(.text+0x0): multiple definition of `__vector_18'
libraries/DMXSerial-master/DMXSerial.cpp.o (symbol from plugin):(.text+0x0): first defined here
HardwareSerial0.cpp.o (symbol from plugin): In function `Serial':
(.text+0x0): multiple definition of `__vector_19'
libraries/DMXSerial-master/DMXSerial.cpp.o (symbol from plugin):(.text+0x0): first defined here
collect2: error: ld returned 1 exit status
You can use the Serial interface either for DMX or for Serial.print(…) but not both in the same script.
The Serial interface of the UNO chip is required to have a robust DMX signal and still enough power left for other processings.
There are some options you can use. They are explained in one of the posts linked at the end of the documentation:
http://mathertel.blogspot.com/2013/05/update-for-dmxserial-library.html
Thank you for the answer. But I don't use the Serial interface in my code. I used it before, but I commented out the Serial initializations before using the DMXSerial library.
Removing the initialization only is not enough, you have to remove all references.
But it may be even hidden: Have a look at https://github.com/mathertel/DMXSerial/issues/14 : There are libraries (here the SD library) that have a hard reference to the Serial built-in.
Hm, it's very interesting. Are there any other libraries that have strong references to the Serial interface?
In my project I use libraries that work with the network:
#include <Artnet.h>
#include <Ethernet.h>
#include <Udp.h>
#include <EEPROM.h>
| gharchive/issue | 2019-11-08T10:26:43 | 2025-04-01T06:44:54.236314 | {
"authors": [
"JLightMedia",
"mathertel"
],
"repo": "mathertel/DMXSerial",
"url": "https://github.com/mathertel/DMXSerial/issues/33",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
80172970 | greaterThanOrEqual and lessThanOrEqual give incorrect values and have no tests
Just stumbled upon this when writing tests for my own project.
greaterThanOrEqual() and lessThanOrEqual() give the inverse results to what you would expect.
Given compare():
public function compare(Money $other)
{
$this->assertSameCurrency($other);
if ($this->amount < $other->amount) {
return -1;
} elseif ($this->amount == $other->amount) {
return 0;
} else {
return 1;
}
}
Then we'd expect gtoe:
public function greaterThanOrEqual(Money $other)
{
return 0 <= $this->compare($other);
}
and ltoe:
public function lessThanOrEqual(Money $other)
{
return 0 >= $this->compare($other);
}
Instead, actual gtoe:
/**
* @param \Money\Money $other
* @return bool
*/
public function greaterThanOrEqual(Money $other)
{
return 0 >= $this->compare($other);
}
actual ltoe:
/**
* @param \Money\Money $other
* @return bool
*/
public function lessThanOrEqual(Money $other)
{
return 0 <= $this->compare($other);
}
Additionally, there are no tests for this behaviour, which is likely the only reason this was allowed to happen at all:
public function testComparison()
{
$euro1 = new Money(1, new Currency('EUR'));
$euro2 = new Money(2, new Currency('EUR'));
$usd = new Money(1, new Currency('USD'));
$this->assertTrue($euro2->greaterThan($euro1));
$this->assertFalse($euro1->greaterThan($euro2));
$this->assertTrue($euro1->lessThan($euro2));
$this->assertFalse($euro2->lessThan($euro1));
$this->assertEquals(-1, $euro1->compare($euro2));
$this->assertEquals(1, $euro2->compare($euro1));
$this->assertEquals(0, $euro1->compare($euro1));
}
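For reference, the corrected semantics and the assertions the issue says are missing, transliterated to Python for illustration (the library's actual API is PHP; `compare` returns -1/0/1, so "greater than or equal" is simply `compare(...) >= 0`):

```python
class Money:
    def __init__(self, amount, currency):
        self.amount = amount
        self.currency = currency

    def compare(self, other):
        assert self.currency == other.currency
        if self.amount < other.amount:
            return -1
        if self.amount == other.amount:
            return 0
        return 1

    def greater_than_or_equal(self, other):
        return self.compare(other) >= 0   # buggy version used <= 0

    def less_than_or_equal(self, other):
        return self.compare(other) <= 0   # buggy version used >= 0
```

With tests like the ones below in place, the inverted operators would have failed immediately.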
https://github.com/mathiasverraes/money/pull/100
Fixed in #100 and #130
| gharchive/issue | 2015-05-24T14:13:35 | 2025-04-01T06:44:54.239989 | {
"authors": [
"sagikazarmark",
"thedavidmeister"
],
"repo": "mathiasverraes/money",
"url": "https://github.com/mathiasverraes/money/issues/99",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2336481955 | 🛑 Gestor Applicados21 is down
In e3c5655, Gestor Applicados21 (https://applicados21.e-valuados.com) was down:
HTTP code: 502
Response time: 65 ms
Resolved: Gestor Applicados21 is back up in 167709c after 7 minutes.
| gharchive/issue | 2024-06-05T17:44:34 | 2025-04-01T06:44:54.245657 | {
"authors": [
"matiasmir"
],
"repo": "matiasmir/status",
"url": "https://github.com/matiasmir/status/issues/209",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1446631424 | 🛑 Gestor Applicados21 is down
In 2b8d1ce, Gestor Applicados21 (https://applicados21.e-valuados.com) was down:
HTTP code: 502
Response time: 127 ms
Resolved: Gestor Applicados21 is back up in ad181b6.
| gharchive/issue | 2022-11-12T22:33:45 | 2025-04-01T06:44:54.248118 | {
"authors": [
"matiasmir"
],
"repo": "matiasmir/status",
"url": "https://github.com/matiasmir/status/issues/31",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1536861061 | 🛑 Wikipedia is down
In f4efc0d, Wikipedia (https://en.wikipedia.org) was down:
HTTP code: 500
Response time: 324 ms
Resolved: Wikipedia is back up in fee5e8b.
| gharchive/issue | 2023-01-17T18:42:25 | 2025-04-01T06:44:54.253591 | {
"authors": [
"maticorv"
],
"repo": "maticorv/upptime",
"url": "https://github.com/maticorv/upptime/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
342402000 | Pressing power button causes a blank figure
Pressing the power button (toggle interaction) causes the whole figure to become blank, instead of just disabling interaction. Difficult to reproduce this every time. Tried with both Firefox and Chrome, and in some cases (mostly Firefox), the button worked as intended. In some cases the plot line is still visible, but the axes and labels disappear.
Tested on jupyter lab. My versions are the following:
matplotlib 2.2.2 py36_1 conda-forge
ipympl 0.2.1 py36_0 conda-forge
jupyter_client 5.2.3 py_1 conda-forge
jupyter_core 4.4.0 py_0 conda-forge
jupyterlab 0.32.1 py36_0 conda-forge
jupyterlab_launcher 0.10.5 py36_0 conda-forge
@jupyter-widgets/jupyterlab-manager
@jupyter-widgets/jupyterlab-manager v0.35.0 enabled OK
Firefox 61.0.1
Chrome 67.0.3396.99
I confirm this bug and that it's also difficult to reproduce.
I get this from time to time as well, at the same time the save button generates a blank png file.
| gharchive/issue | 2018-07-18T16:25:45 | 2025-04-01T06:44:54.354672 | {
"authors": [
"david-waterworth",
"hadim",
"tiagopereira"
],
"repo": "matplotlib/jupyter-matplotlib",
"url": "https://github.com/matplotlib/jupyter-matplotlib/issues/61",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
988751060 | do not commit logs
adding default log files to .gitignore
Tracked by: https://gitlab.com/mx-puppet/discord/mx-puppet-discord/-/merge_requests/170
| gharchive/pull-request | 2021-09-06T04:27:14 | 2025-04-01T06:44:54.386957 | {
"authors": [
"MRAAGH",
"dsonck92"
],
"repo": "matrix-discord/mx-puppet-discord",
"url": "https://github.com/matrix-discord/mx-puppet-discord/pull/170",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1336180043 | Fix Complement hiding errors when the blueprint and image fails to build
Before:
federation_room_messages_test.go:14: Deploy: Deploy returned error Deploy: No images have been built for blueprint perf_many_messages
After:
federation_room_messages_test.go:14: Deploy: Failed to construct blueprint: ConstructBlueprintIfNotExist(perf_many_messages): failed to build image: errors whilst constructing blueprint perf_many_messages: [perf_many_messages.hs1 : request http://localhost:57273/_matrix/client/v3/rooms/%21TjBoHVjQldZBcmXRwW:hs1/send/m.room.message/1?access_token=syt_dXNlcl8w_bWvqKHmRcueZttMyINGR_1daNA4&server_name= returned HTTP 403 Forbidden : {"errcode":"M_FORBIDDEN","error":"User @user_0:hs1 not in room !TjBoHVjQldZBcmXRwW:hs1 (None)"} terminated]
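The underlying change is simply to propagate the root cause instead of discarding it. A generic sketch of the anti-pattern and the fix (Python for illustration; Complement itself is Go, where the equivalent is wrapping with `fmt.Errorf("...: %w", err)`):

```python
class DeployError(Exception):
    pass

def build_image(blueprint):
    raise RuntimeError(f"no images have been built for blueprint {blueprint}")

def deploy_swallowing(blueprint):
    try:
        build_image(blueprint)
    except RuntimeError:
        # Anti-pattern: the root cause is discarded, leaving only
        # a generic "Deploy returned error" in the test log.
        raise DeployError("Deploy returned error")

def deploy_wrapping(blueprint):
    try:
        build_image(blueprint)
    except RuntimeError as e:
        # Keep the cause attached so the failing test shows it.
        raise DeployError(f"failed to construct blueprint: {e}") from e
```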
Looks like this was fixed in https://github.com/matrix-org/complement/pull/436/files
| gharchive/pull-request | 2022-08-11T16:07:51 | 2025-04-01T06:44:54.395466 | {
"authors": [
"MadLittleMods"
],
"repo": "matrix-org/complement",
"url": "https://github.com/matrix-org/complement/pull/446",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1096617629 | Event::depth does not specify if negative values are (dis)allowed
Link to problem area: https://spec.matrix.org/v1.1/rooms/v1/#event-format
Issue
Required: The maximum depth of the prev_events, plus one. Must be less than the maximum value for an integer (2^63 - 1). If the room’s depth is already at the limit, the depth must be set to the limit.
This does not specify if negative values are (dis)allowed.
Convention is that depth starts from 0, as it's an increasing/adding counter only going upwards.
This has stateres consequences only in v1, but it is also defined in other room versions.
ruma defines this as an unsigned int variant.
https://spec.matrix.org/v1.1/#event-graphs
An event’s depth is a positive integer that is strictly greater than the depths of any of its parents. The root event should have a depth of 1. Thus if one event is before another, then it must have a strictly smaller depth.
Note that 0 is also valid in some cases where annotated.
Ah, thanks!
Actually I found a place that mentions this:
To order and ease chronological comparison between the events within the graph, homeservers maintain a depth metadata field on each event. An event’s depth is a positive integer that is strictly greater than the depths of any of its parents. The root event should have a depth of 1. Thus if one event is before another, then it must have a strictly smaller depth.
— https://spec.matrix.org/v1.1/#event-graphs
However, "positive integer" to me sounds like it would actually exclude 0, contradicting what you wrote.
Note that 0 is also valid in some cases where annotated.
What's an example of such an annotation? Would it be okay to change that section to say non-negative?
In various places it'll say "May be zero" (or similar). I don't have a link off-hand, sorry.
The intro text is meant to be accurate for the general case, and was the original intent of v1 rooms, but not all rooms in v1 start at 1 because of ancient Synapse issues (as Synapse was Matrix way back then). depth is supposed to be >=1 in v1 and v2+ versions, but there's those exceptions in v1 which make it zero sometimes.
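A hypothetical validator capturing the rules discussed above (illustrative only, not part of any Matrix implementation): depth must be a non-negative integer, strictly greater than every parent's depth, capped at 2^63 − 1 and pinned there once reached, with zero tolerated only as the historical room-v1 exception.

```python
MAX_DEPTH = 2**63 - 1

def valid_depth(depth, parent_depths, room_version="1"):
    if not isinstance(depth, int) or depth > MAX_DEPTH:
        return False
    if depth < 0:
        return False                      # negative depths are never valid
    if depth == 0:
        # Only seen in ancient v1 rooms due to old Synapse behaviour.
        return room_version == "1" and not parent_depths
    if parent_depths and depth <= max(parent_depths):
        # Must be strictly greater than all parents, unless the room
        # is already at the cap, where depth stays pinned to the limit.
        return depth == MAX_DEPTH and max(parent_depths) == MAX_DEPTH
    return True
```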
| gharchive/issue | 2022-01-07T19:46:24 | 2025-04-01T06:44:54.413705 | {
"authors": [
"ShadowJonathan",
"jplatte",
"turt2live"
],
"repo": "matrix-org/matrix-doc",
"url": "https://github.com/matrix-org/matrix-doc/issues/3629",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
164276152 | Update AFNetworking to 3.1
https://github.com/matrix-org/matrix-ios-sdk/issues/113
I tried to keep the same behaviour we had with the previous AFNetworking version, specifically for media uploads when the app goes into the background.
I tested SSL in different situations but I never managed to reach this code path: https://github.com/matrix-org/matrix-ios-sdk/blob/1feaa3ddc4c1a49f9fb7438ac202721024dbc732/MatrixSDK/Utils/MXHTTPClient.m#L455-L471.
As I cannot see how we could ever reach that else block, I have removed it.
There is an API break in MXRestClient with the introduction of NSProgress.
sgtm
| gharchive/pull-request | 2016-07-07T10:17:18 | 2025-04-01T06:44:54.416067 | {
"authors": [
"giomfo",
"manuroe"
],
"repo": "matrix-org/matrix-ios-sdk",
"url": "https://github.com/matrix-org/matrix-ios-sdk/pull/114",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1933187155 | weak var Mention context to inject mentions context
Since the NSTextAttachmentProvider can only see the text view, and the pill might require a context that holds various data and helper functions to render correctly, this adds a protocol for such a context that can be injected as a weak var.
For example, on Element X this context could be the room context, which is useful for getting the information about already-fetched members and the image provider.
The consuming application will already know what type the context is and can simply cast it.
Codecov Report
All modified lines are covered by tests :white_check_mark:
Comparison is base (827ae99) 89.94% compared to head (28346ca) 89.91%.
Additional details and impacted files
@@ Coverage Diff @@
## main #840 +/- ##
==========================================
- Coverage 89.94% 89.91% -0.03%
==========================================
Files 114 82 -32
Lines 16306 14767 -1539
Branches 620 0 -620
==========================================
- Hits 14666 13278 -1388
+ Misses 1617 1489 -128
+ Partials 23 0 -23
Flag            Coverage Δ
uitests         ?
uitests-ios     ?
unittests       89.91% <ø> (+1.24%) :arrow_up:
unittests-ios   ?
unittests-rust  89.91% <ø> (ø)
Flags with carried forward coverage won't be shown. Click here to find out more.
see 32 files with indirect coverage changes
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
SGTM
I might improve it tomorrow, since I was able to run it locally in El-X at the end
| gharchive/pull-request | 2023-10-09T14:14:31 | 2025-04-01T06:44:54.426620 | {
"authors": [
"Velin92",
"codecov-commenter"
],
"repo": "matrix-org/matrix-rich-text-editor",
"url": "https://github.com/matrix-org/matrix-rich-text-editor/pull/840",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2192794966 | State resets are hard to recover from
This issue is using the term "state reset" to refer to the situation where Synapse recalculates room state incorrectly and sends very old room state down /sync.
The proxy is a Matrix client like any other (e.g. Element Web). This means it is vulnerable to state resets like any other client. Unlike clients, however (which can just "clear cache and reload"), there is no mechanism for the proxy to recover from a state-reset room.
This can manifest as rooms spontaneously appearing/disappearing, based on historical state. This is made worse because you can't just do an initial sync on the affected clients and have it self-heal because this is the typical failure mode:
The proxy is told the correct state
The proxy is told incorrect old state. The proxy doesn't know it's old as it's not in its DB and treats this as new.
An initial sync which returns this room returns the correct state.
The proxy ignores the correct state because it has already seen it, and has seen it was superseded by the old state. This could be handled better if MSC4033 was a thing (since the proxy could see the old state has a lower ordering and hence refuse to update the current state).
A native implementation would not have this problem because it does not rely on /sync v2's state calculations.
This has been encountered in the wild by @frebib:nerdhouse.io in #synapse:matrix.org, which amusingly was also the room referred to in https://github.com/element-hq/synapse/issues/8629 - in this case it manifested as a disappearing room.
This can cause UTDs if it happens in an E2EE room.
This is ultimately a Synapse bug, but the proxy may be able to mitigate the worst of it. There are a few heuristics that can be applied:

- When replacing room state, check origin_server_ts and assert that the old event has a lower timestamp and the new event has a higher timestamp. Allow a clock skew buffer, e.g. 24h.
- If the "new" state has an older origin_server_ts (>24h), flag this room as potentially state reset, along with the number of state events affected. Yell about it in Sentry.
- For potentially state reset rooms, determine the true current state for member events:
  - If the number of potentially reset events < N, hit /state/m.room.member/$user_id
  - If the number of potentially reset events >= N, hit /joined_members.
  - If the "new" membership doesn't match the membership returned here, then the "new" membership is a state reset, so drop it.
  - If the "new" membership does match the membership here, then it's a genuine event from a server with a very bad clock skew, so allow it.
This means there are two tasks:
Detect when state resets happen and yell about it.
Accept/reject state based on additional HS queries.
I've mentioned a detection threshold of 24h. If this is too high then we won't catch all state resets. If this is too low then we'll catch clock skewed HSes and cause additional traffic on the HS to re-query state. We should begin tracking every state update that breaks temporal causality (that is, the update has a lower timestamp than the state being replaced), so we can monitor what value would be appropriate.
We need to protect against race conditions on the /members queries, so any potential state reset will need to cause the poller to temporarily stop polling to reconcile the true state.
Finally, we need to ensure that a persistently clock skewed HS (malicious or not) cannot cause a DoS on the proxy. This may involve dumping updates for that room into a temporary holding area. A true state reset will be sent to all pollers, meaning we MUST de-duplicate the work.
| gharchive/issue | 2024-03-18T17:05:55 | 2025-04-01T06:44:54.455012 | {
"authors": [
"kegsay"
],
"repo": "matrix-org/sliding-sync",
"url": "https://github.com/matrix-org/sliding-sync/issues/412",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1227725227 | Post Title
Test.
This is some discourse about the post.
| gharchive/issue | 2022-05-06T11:10:03 | 2025-04-01T06:44:54.496239 | {
"authors": [
"mattcroat"
],
"repo": "mattcroat/redesign",
"url": "https://github.com/mattcroat/redesign/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
876888151 | fix: replaceColor option on image particle, fixes #1519
@matteobruni
Hi Matteo,
I created a utility method to replace the SVG color, however I wasn't able to get it to work with RGB and HSL, it only works with HEX so I added a TODO comment about it.
And of course, feel free to comment any inconsistencies or point out anything that you think could have been done in a better way.
5c295e6 to 4a19da9:
Updated the title referencing issue #1319 instead of #1519 and changed a return statement DeepCode pointed out.
I just checked, there's already a replaceColorSvg in the ColorUtils.ts file. I think if you update that you'll have the fix.
The ImageDrawer.ts shouldn't be updated because the Particle.ts file already calls it in the particle constructor.
@matteobruni I updated the PR with some refactoring.
I think the original problem was that replaceColorSvg did not account for the possibility of the loaded SVG not having a fill set, and that the regex it used to match and replace the SVG fill color only accounted for hex values. So if the SVG had no fill, or any fill other than hex, the function wouldn't replace the SVG's color.
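The two cases described above (replace an existing fill whatever its format, or add one when it is missing) can be sketched as follows. This is the logic in Python for illustration only; the real utility is replaceColorSvg in ColorUtils.ts:

```python
import re

# Illustrative sketch of the fix described above (the real implementation is
# replaceColorSvg in ColorUtils.ts). Matching any quoted fill value covers
# hex, rgb()/hsl(), and named colors, not just hex; if no fill is present at
# all, one is added on the root <svg> element.
def replace_color_svg(svg_xml: str, color: str) -> str:
    replaced, count = re.subn(r'fill="[^"]*"', f'fill="{color}"', svg_xml)
    if count > 0:
        return replaced
    return svg_xml.replace("<svg", f'<svg fill="{color}"', 1)
```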
I've seen the changes, they are not all crystal clear but I commented lines.
Can you run lerna run build in the folder before committing/pushing so the linter will run. I see some indentation issues that are not following the standards (or GH messed up with the viewer)
I ran lerna run build
It changed some indentation on some test file, should I add it with this PR as a commit?
| gharchive/pull-request | 2021-05-05T21:51:49 | 2025-04-01T06:44:54.502581 | {
"authors": [
"GoldenMaximo",
"matteobruni"
],
"repo": "matteobruni/tsparticles",
"url": "https://github.com/matteobruni/tsparticles/pull/1520",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
258305375 | Help Wanted: Explain how to determine a user ID
The user search command can use a user ID instead of a username. Describe how to determine a user ID.
See also #1501.
It looks like the only way to retrieve a user ID is through the API (or by querying the database). There does not seem to be a way to do this through the system console.
You can use the CLI. 'sudo ./platform user search ' returns:
id: m1n7ef9t8tnctmmt8x8bbn4o6a
username: peter_d
nickname:
position:
...
Hi @peterdamian, thanks for raising this.
I can confirm that it's now possible to obtain the user ID via the System Console in System Console > Users (screenshot attached). Would you say that's the best option for users (vs using the CLI)? If so, would you be open to submitting a PR for the CLI docs to reflect this?
Hi @justinegeffen , I would love to add this to the docs. And yeah, this might be a better way to get a user ID (instead of using the CLI) in general. Thanks, will update soon. :+1:
@avats-dev - Are you working on this issue?
@Sonichigo, thank you for your interest! I'd like to confirm with @avats-dev whether they're still working on this issue and then we can discuss further. :)
Sure sure!!
@Sonichigo - You're welcome to work on this issue! Would you like me to assign it to you?
I would love to work on this! Can you also provide a bit of context to be kept in mind while working on this, it would be great!!
@Sonichigo Thank you for taking this documentation task on!
There are multiple ways to retrieve a Mattermost User ID.
from the CLI (only available for customers up to v5.39)
from the mmctl (replaces the CLI in v6.0 and later)
from the System Console (the most common ways for a Mattermost System Admin to perform this task)
I recommend reviewing the existing product documentation for each of these areas. Do we have steps documented for all 3 workflows above? If any are missing, identify an appropriate place to document these details, and the path the user would take through the docs to reach that information.
Please let me know if you have questions or concerns.
Thanks for assigning me the issue Carrie! Sure, I'll reach out to you if i have any concerns or questions.
Hey @cwarnermm , I have gone through the product documentation and found these docs. We can search for a user and get their user ID by running the search command; I'm still looking for the System Console option.
https://docs.mattermost.com/manage/command-line-tools.html#mattermost-user-search
https://docs.mattermost.com/manage/mmctl-command-line-tool.html#mmctl-user-search
Hey @cwarnermm , found the Additional System Admin roles docs and think it would be good to document the System Console method there. Do let me know whether that works!
@Sonichigo - The Additional System Admin roles page describes how to assign granular System Console access. On that page, there are a handful of cases where the applicable mmctl command uses userid. So this is certainly one place where this information could live.
I expect that there are numerous situations where a user may need to determine a user ID, and how they determine the ID will likely depend on what they're doing at the time this information is needed. With this in mind, I'm envisioning a new docs page that details all 3 ways of determining a user ID. We could then include links to this new page from other relevant topics. Thoughts?
That makes sense! How a user determines the user ID depends on what they're doing at the time. Do let me know how I can help with this!
@cwarnermm any updates on this?
Happy New Year, @Sonichigo! Are you open to drafting a PR for a new docs page that details all 3 ways of determining a user ID. We could then include links to this new page from other relevant topics and I can help with that step.
Replaced by https://github.com/mattermost/docs/issues/6592
| gharchive/issue | 2017-09-17T14:03:16 | 2025-04-01T06:44:54.533188 | {
"authors": [
"Sonichigo",
"avats-dev",
"cwarnermm",
"justinegeffen",
"peterdamian"
],
"repo": "mattermost/docs",
"url": "https://github.com/mattermost/docs/issues/1502",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
202211817 | UX guidelines for on-screen text
Since there's a PR to update our UX guidelines (#759), I'll create a new issue to track new guidelines for on-screen text, rather than creating a WIP PR for it which would result in merge conflicts:
1 - The title of a setting should explicitly describe what a setting affects.
Example: A setting to default the expanded or collapsed state of link previews
- incorrect: "Link previews"
- correct: "Automatically expand link previews"
2 - Use the active voice
- incorrect: "This action will delete..."
- correct: "This action deletes..."
3 - Be concise
- incorrect: "Are you sure that you want..."
- correct: "Are you sure you want..."
4 - Ensure plural interpretations work when applicable
- incorrect: "...breaks the integration using it"
- correct: "...breaks any integrations using it"
5 - Test text with machine translation to reduce non-obvious, English-specific jargon:
- incorrect: "There are a few configuration settings you might want to fiddle with when setting up your instance of Mattermost"
- correct: “There are a few configuration settings you can adjust when setting up your instance of Mattermost.”
Note: The incorrect string above machine-translated into German and then back into English looks like this: "There are some configuration settings you could know if your instance Matter Most violin".
See https://pre-release.mattermost.com/core/pl/nf9agdb48ife9yrhzw7z4cj6do for more details
Great start!
Propose we include the "Thor test", or refer to it as the embodiment of the "Mattermost voice for documentation and help text." The Thor test asks "How would Agent Coulson explain this to Thor?" which means explaining something to an alien being in simple terms without jargon or complexity.
This is probably the "north star" to think about (it includes being concise and easy to translate) and the other guidelines seem like finer details/safety-net checks.
- Consider the surrounding UI, including titles above the input field (e.g. "Notification Preferences for [CHANNEL]"), and
- be consistent with the naming of input fields that have similar actions, e.g. "Send email notifications", "Send desktop notifications".
See #1084 for proposed changes
| gharchive/issue | 2017-01-20T18:39:09 | 2025-04-01T06:44:54.539447 | {
"authors": [
"it33",
"jasonblais"
],
"repo": "mattermost/docs",
"url": "https://github.com/mattermost/docs/issues/805",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2460442974 | add unit tests to app/utils/post
Summary
Increase code coverage for app/utils/post from 10.16% to 100%
Ticket Link
https://mattermost.atlassian.net/browse/MM-59575
Checklist
[x] Added or updated unit tests (required for all new features)
[ ] Has UI changes
[ ] Includes text changes and localization file updates
[ ] Have tested against the 5 core themes to ensure consistency between them.
[ ] Have run E2E tests by adding label E2E iOS tests for PR.
Release Note
NONE
@enahum ok to merge this? There's a few others that look ready to merge too
| gharchive/pull-request | 2024-08-12T09:11:41 | 2025-04-01T06:44:54.555529 | {
"authors": [
"enahum",
"jwilander"
],
"repo": "mattermost/mattermost-mobile",
"url": "https://github.com/mattermost/mattermost-mobile/pull/8142",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
737056340 | restricting access for MM only
Hello All:
At this point Jitsi is set up and working both within Mattermost and outside of it.
I would like to limit access to within Mattermost only, thereby disabling direct access via the URL (example: meeting.johnson.com).
Any ideas or helpful hints?
Does the JWT authentication solve your use case?
| gharchive/issue | 2020-11-05T15:57:49 | 2025-04-01T06:44:54.557378 | {
"authors": [
"eyebank",
"hanzei"
],
"repo": "mattermost/mattermost-plugin-jitsi",
"url": "https://github.com/mattermost/mattermost-plugin-jitsi/issues/163",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1705142369 | Changed the header for the authorization form
Summary
Ticket Link
Related Pull Requests
Has server changes (please link here)
Has mobile changes (please link here)
Screenshots
Release Note
Hello @KabanovNM,
Thanks for your pull request! A Core Committer will review your pull request soon. For code contributions, you can learn more about the review process here.
Per the Mattermost Contribution Guide, we need to add you to the list of approved contributors for the Mattermost project.
Please help complete the Mattermost contribution license agreement?
Once you have signed the CLA, please comment with /check-cla and confirm that the CLA check is green.
This is a standard procedure for many open source projects.
Please let us know if you have any questions.
We are very happy to have you join our growing community! If you're not yet a member, please consider joining our Contributors community channel to meet other contributors and discuss new opportunities with the core team.
E2E tests not automatically triggered, because PR has no approval yet. Please ask a developer to review and then try again to attach the QA label.
| gharchive/pull-request | 2023-05-11T06:17:59 | 2025-04-01T06:44:54.596690 | {
"authors": [
"KabanovNM",
"mattermost-build"
],
"repo": "mattermost/mattermost-webapp",
"url": "https://github.com/mattermost/mattermost-webapp/pull/12402",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
754525005 | Cypress/E2E: Fix for several flaky tests (part 3)
Summary
Fix for several flaky tests
Submitted on top of https://github.com/mattermost/mattermost-webapp/pull/7142 for ease of (code) review.
All test cases passed 👍
Will try to auto merge this PR once all tests and checks are passing. This might take up to an hour.
| gharchive/pull-request | 2020-12-01T16:06:33 | 2025-04-01T06:44:54.598342 | {
"authors": [
"furqanmlk",
"mattermod",
"saturninoabril"
],
"repo": "mattermost/mattermost-webapp",
"url": "https://github.com/mattermost/mattermost-webapp/pull/7143",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
134726959 | Column misalignment
Bug: when pulling multiple columns where data is not available for every time period, the result is column misalignment. The code below shows an example of the problem:
import pdblp

con = pdblp.BCon(debug=True)
con.start()
tickers = ["ACTUAL_RELEASE", "BN_SURVEY_MEDIAN", "FIRST_REVISION",
           "FIRST_REVISION DATE", "ECO_RELEASE_DT"]
con.bdh("GDP CQOQ Index", tickers, "20140410", "20160210")
Fixed in c149c2a8a4da73a1eaeb12a2f514eba3cad3a3b3
| gharchive/issue | 2016-02-18T23:17:00 | 2025-04-01T06:44:54.634829 | {
"authors": [
"matthewgilbert"
],
"repo": "matthewgilbert/pdblp",
"url": "https://github.com/matthewgilbert/pdblp/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
130188565 | Error running jasmine-promises
I'm getting an error when I try and use jasmine-promises.
The error is:
Error: Cannot find module './patch' from '[PATH TO NODE_MODULES]/node_modules/jasmine-promises/dist'
I see the call to require patch.js in the code, but no such file exists in dist/
I'm sure it's something simple I'm doing wrong.
Any help would be much appreciated.
Hi @torontocode,
It works for me. Please can you detail how you're including jasmine-promises in your project?
The project is packaged with browserify, and the ./patch require should resolve to the inline 'module' here https://github.com/matthewjh/jasmine-promises/blob/master/dist/jasmine-promises.js#L14
Sure:
in karma.conf.js:
frameworks: ['browserify', 'jasmine'],
preprocessors: {
  'tests/**/*.test.js': ['browserify']
}
and then in client.service.test.js:
require('jasmine-promises');
I see the module inline in my code, but it doesn't get found and I get the above error.
I tried adding the path to the file in karma.conf.js and it works fine.
I'm going to close this issue because I don't think there is a problem with this NPM module, I think the problem lies with using require() in test files. It doesn't work as expected.
Ah. Maybe the issue is that you're using browserify to pre-process your test specs, which will cause it to process jasmine-promises. require('jasmine-promises') will execute code already packaged via browserify, which may be the problem -- it's trying to package code that's already browserified.
That's just a theory. I can look into this further.
| gharchive/issue | 2016-01-31T21:30:07 | 2025-04-01T06:44:54.639569 | {
"authors": [
"matthewjh",
"torontocode"
],
"repo": "matthewjh/jasmine-promises",
"url": "https://github.com/matthewjh/jasmine-promises/issues/7",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1040229854 | Enable use of DataFeedListRow's url in FetchDataFeed
Use the url param of DataFeedListRow to download the datafeed from the datafeedlist results, instead of requiring feedId and apiKey in FetchDataFeed.
Branch: https://github.com/matthiasbruns/awin-go/compare/feature/1_FetchDataFeed_from_url
| gharchive/issue | 2021-10-30T15:44:35 | 2025-04-01T06:44:54.641234 | {
"authors": [
"matthiasbruns"
],
"repo": "matthiasbruns/awin-go",
"url": "https://github.com/matthiasbruns/awin-go/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2650563643 | wrf stop without error d01 2020-08-07_00:00:00 in NOAH DRV
Describe your issue
I can run real and wrf (WPS version 4.5, WRF version 4.5.2) before using w2w. However, after I used w2w, real.exe can be successfully run. The wrf.exe stop without error. In the rsl.error.out, it stopped at:
d01 2020-08-07_00:00:00 in NOAH DRV (when there is only one domain) or d02 2020-08-07_00:00:00 in NOAH DRV (when there are three domains).
I successfully run wrf.exe after w2w before on another HPC with the same version of WPS and WRF. I do everything same with before and the only difference is different HPC, which may means different system environment.
I used a modified metgrid table for ERA5 data, because if I used original metgrid table and w2w, the real.exe will also get some errors.
w2w --version
w2w 0.5.0
nc-config --all
This netCDF 4.9.0 has been built with the following features:
--cc -> /usr/tce/backend/installations/linux-rhel8-x86_64/oneapi-2022.1.0/mvapich2-2.3.7-xl7zrmcp3esvynylj2bhbqaycd3ipxnl/bin/mpicc
--cflags -> -I/usr/tce/backend/installations/linux-rhel8-x86_64/oneapi-2022.1.0/netcdf-c-4.9.0-5c7djlxl33mroeyz7f5npjrsiphp3wt5/include -I/usr/tce/backend/installations/linux-rhel8-x86_64/oneapi-2022.1.0/hdf5-1.14.0-wjnwjw5x67s4gx6unedua5wn3kx4esdj/include -I/usr/tce/backend/installations/linux-rhel8-x86_64/gcc-8.5.0/hdf-4.2.15-6tz2jp2bvlhmgaglcov33zwhvsz24zwh/include -I/usr/tce/backend/installations/linux-rhel8-x86_64/oneapi-2022.1.0/zstd-1.5.2-ernhpjyyy7h66ayt6bwvka6i54wastrt/include
--libs -> -L/usr/tce/backend/installations/linux-rhel8-x86_64/oneapi-2022.1.0/netcdf-c-4.9.0-5c7djlxl33mroeyz7f5npjrsiphp3wt5/lib -lnetcdf
--static -> -ljpeg -lmfhdf -ldf -ljpeg -lhdf5_hl -lhdf5 -lm -lz -lzip -lsz -lbz2 -lzstd -lxml2 -ljpeg
--has-c++ -> no
--cxx ->
--has-c++4 -> no
--cxx4 ->
--has-fortran -> yes
--fc -> /usr/tce/backend/installations/linux-rhel8-x86_64/gcc-10.3.1/intel-oneapi-compilers-2022.1.0-43xp3r52jx2q2rkf3ctzvskqu572xbky/compiler/2022.1.0/linux/bin/ifx
--fflags -> -I/usr/tce/backend/installations/linux-rhel8-x86_64/oneapi-2022.1.0/netcdf-fortran-4.6.0-4qvpnkak7xpvbyiln25ysdzi5ibi6me6/include -I/usr/tce/backend/installations/linux-rhel8-x86_64/oneapi-2022.1.0/netcdf-fortran-4.6.0-4qvpnkak7xpvbyiln25ysdzi5ibi6me6/include
--flibs -> -L/usr/tce/backend/installations/linux-rhel8-x86_64/oneapi-2022.1.0/netcdf-fortran-4.6.0-4qvpnkak7xpvbyiln25ysdzi5ibi6me6/lib -lnetcdff -lnetcdf -lnetcdf -lm
--has-f90 ->
--has-f03 -> yes
--has-dap -> no
--has-dap2 -> no
--has-dap4 -> no
--has-nc2 -> yes
--has-nc4 -> yes
--has-hdf5 -> yes
--has-hdf4 -> yes
--has-logging -> no
--has-pnetcdf -> no
--has-szlib -> yes
--has-cdf5 -> yes
--has-parallel4 -> yes
--has-parallel -> yes
--has-nczarr -> yes
--has-zstd -> yes
--has-benchmarks -> no
--prefix -> /usr/tce/backend/installations/linux-rhel8-x86_64/oneapi-2022.1.0/netcdf-c-4.9.0-5c7djlxl33mroeyz7f5npjrsiphp3wt5
--includedir -> /usr/tce/backend/installations/linux-rhel8-x86_64/oneapi-2022.1.0/netcdf-c-4.9.0-5c7djlxl33mroeyz7f5npjrsiphp3wt5/include
--libdir -> /usr/tce/backend/installations/linux-rhel8-x86_64/oneapi-2022.1.0/netcdf-c-4.9.0-5c7djlxl33mroeyz7f5npjrsiphp3wt5/lib
--version -> netCDF 4.9.0
Installed Packages
affine==2.4.0
aiobotocore @ file:///croot/aiobotocore_1714464387507/work
aiohttp @ file:///croot/aiohttp_1715108769648/work
aioitertools @ file:///tmp/build/80754af9/aioitertools_1607109665762/work
aiosignal @ file:///tmp/build/80754af9/aiosignal_1637843061372/work
alabaster @ file:///croot/alabaster_1718201490751/work
altair @ file:///work/perseverance-python-buildout/croot/altair_1698866618831/work
anaconda-anon-usage @ file:///croot/anaconda-anon-usage_1710965072196/work
anaconda-catalogs @ file:///work/perseverance-python-buildout/croot/anaconda-catalogs_1701755653155/work
anaconda-client @ file:///croot/anaconda-client_1708640631824/work
anaconda-cloud-auth @ file:///croot/anaconda-cloud-auth_1713991386423/work
anaconda-navigator @ file:///croot/anaconda-navigator_1713464016024/work
anaconda-project @ file:///croot/anaconda-project_1706049196717/work
annotated-types @ file:///croot/annotated-types_1709542908624/work
anyio @ file:///croot/anyio_1706220167567/work
appdirs==1.4.4
archspec @ file:///croot/archspec_1709217642129/work
argon2-cffi @ file:///opt/conda/conda-bld/argon2-cffi_1645000214183/work
argon2-cffi-bindings @ file:///work/perseverance-python-buildout/croot/argon2-cffi-bindings_1698877183685/work
arrow @ file:///work/perseverance-python-buildout/croot/arrow_1698877225835/work
asciitree==0.3.3
astroid @ file:///work/perseverance-python-buildout/croot/astroid_1698851155125/work
astropy @ file:///croot/astropy_1717771811817/work
astropy-iers-data @ file:///croot/astropy-iers-data_1717700760580/work
asttokens @ file:///opt/conda/conda-bld/asttokens_1646925590279/work
async-lru @ file:///work/perseverance-python-buildout/croot/async-lru_1701732681408/work
atomicwrites==1.4.0
attrs @ file:///work/perseverance-python-buildout/croot/attrs_1698845858439/work
Automat @ file:///tmp/build/80754af9/automat_1600298431173/work
autopep8 @ file:///croot/autopep8_1708962882016/work
Babel @ file:///work/perseverance-python-buildout/croot/babel_1698863996602/work
bcrypt @ file:///work/perseverance-python-buildout/croot/bcrypt_1698864043363/work
beautifulsoup4 @ file:///croot/beautifulsoup4-split_1718029820055/work
binaryornot @ file:///tmp/build/80754af9/binaryornot_1617751525010/work
black @ file:///croot/black_1717597493389/work
bleach @ file:///opt/conda/conda-bld/bleach_1641577558959/work
blinker @ file:///work/perseverance-python-buildout/croot/blinker_1698846408677/work
bokeh @ file:///croot/bokeh_1718119056511/work
boltons @ file:///work/perseverance-python-buildout/croot/boltons_1698851177130/work
botocore @ file:///croot/botocore_1714460531862/work
Bottleneck @ file:///croot/bottleneck_1709069899917/work
Brotli @ file:///croot/brotli-split_1714483155106/work
cachetools @ file:///croot/cachetools_1713977085444/work
certifi @ file:///croot/certifi_1717618050233/work/certifi
cffi @ file:///croot/cffi_1714483155441/work
cftime==1.6.4
chardet @ file:///work/perseverance-python-buildout/croot/chardet_1698867756032/work
charset-normalizer @ file:///tmp/build/80754af9/charset-normalizer_1630003229654/work
click @ file:///work/perseverance-python-buildout/croot/click_1698845879718/work
click-plugins==1.1.1
cligj==0.7.2
cloudpickle @ file:///work/perseverance-python-buildout/croot/cloudpickle_1698851247129/work
colorama @ file:///work/perseverance-python-buildout/croot/colorama_1698847155966/work
colorcet @ file:///croot/colorcet_1709758348067/work
comm @ file:///croot/comm_1709322850197/work
conda @ file:///croot/conda_1715635703388/work
conda-build @ file:///croot/conda-build_1716991285767/work
conda-content-trust @ file:///croot/conda-content-trust_1714483159009/work
conda-libmamba-solver @ file:///croot/conda-libmamba-solver_1706733287605/work/src
conda-pack @ file:///croot/conda-pack_1710258022278/work
conda-package-handling @ file:///croot/conda-package-handling_1718138267740/work
conda-repo-cli @ file:///croot/conda-repo-cli_1709246574569/work
conda-token @ file:///croot/conda-token_1718995751285/work
conda_index @ file:///croot/conda-index_1719338209492/work
conda_package_streaming @ file:///croot/conda-package-streaming_1718136078615/work
constantly @ file:///croot/constantly_1703165600746/work
contourpy @ file:///work/perseverance-python-buildout/croot/contourpy_1701756524386/work
cookiecutter @ file:///croot/cookiecutter_1711059812130/work
cryptography @ file:///croot/cryptography_1714660666131/work
cssselect @ file:///croot/cssselect_1707339882883/work
cycler @ file:///tmp/build/80754af9/cycler_1637851556182/work
cytoolz @ file:///croot/cytoolz_1701723583781/work
dask @ file:///croot/dask-core_1715838601404/work
dask-expr @ file:///croot/dask-expr_1715846440852/work
datashader @ file:///croot/datashader_1718121497283/work
debugpy @ file:///work/perseverance-python-buildout/croot/debugpy_1698884710808/work
decorator @ file:///opt/conda/conda-bld/decorator_1643638310831/work
defusedxml @ file:///tmp/build/80754af9/defusedxml_1615228127516/work
diff-match-patch @ file:///Users/ktietz/demo/mc3/conda-bld/diff-match-patch_1630511840874/work
dill @ file:///croot/dill_1715094664823/work
distributed @ file:///croot/distributed_1715844453814/work
distro @ file:///croot/distro_1714488253808/work
docstring-to-markdown @ file:///work/perseverance-python-buildout/croot/docstring-to-markdown_1698864372211/work
docutils @ file:///work/perseverance-python-buildout/croot/docutils_1698846509640/work
entrypoints @ file:///work/perseverance-python-buildout/croot/entrypoints_1698864391933/work
et-xmlfile @ file:///work/perseverance-python-buildout/croot/et_xmlfile_1698868193149/work
executing @ file:///opt/conda/conda-bld/executing_1646925071911/work
fasteners==0.19
fastjsonschema @ file:///work/perseverance-python-buildout/croot/python-fastjsonschema_1698846742882/work
filelock @ file:///work/perseverance-python-buildout/croot/filelock_1701733993137/work
flake8 @ file:///croot/flake8_1708965256870/work
Flask @ file:///croot/flask_1716545870149/work
flox==0.9.13
fonttools @ file:///croot/fonttools_1713551344105/work
frozendict @ file:///croot/frozendict_1713194832637/work
frozenlist @ file:///work/perseverance-python-buildout/croot/frozenlist_1698886842990/work
fsspec @ file:///croot/fsspec_1714461537038/work
gensim @ file:///work/perseverance-python-buildout/croot/gensim_1698886961185/work
gitdb @ file:///tmp/build/80754af9/gitdb_1617117951232/work
GitPython @ file:///work/perseverance-python-buildout/croot/gitpython_1698887140758/work
greenlet @ file:///croot/greenlet_1702059959679/work
h5netcdf==1.4.0
h5py @ file:///croot/h5py_1715094721489/work
HeapDict @ file:///Users/ktietz/demo/mc3/conda-bld/heapdict_1630598515714/work
holoviews @ file:///croot/holoviews_1718294635592/work
hvplot @ file:///croot/hvplot_1715090436541/work
hyperlink @ file:///tmp/build/80754af9/hyperlink_1610130746837/work
idna @ file:///croot/idna_1714398848350/work
imagecodecs @ file:///work/perseverance-python-buildout/croot/imagecodecs_1698851963060/work
imageio @ file:///croot/imageio_1707247282708/work
imagesize @ file:///work/perseverance-python-buildout/croot/imagesize_1698864514488/work
imbalanced-learn @ file:///croot/imbalanced-learn_1718132237102/work
importlib-metadata @ file:///croot/importlib_metadata-suite_1704813515092/work
incremental @ file:///croot/incremental_1708639938299/work
inflection @ file:///work/perseverance-python-buildout/croot/inflection_1698869084799/work
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
intake @ file:///croot/intake_1717513813848/work
intervaltree @ file:///Users/ktietz/demo/mc3/conda-bld/intervaltree_1630511889664/work
ipykernel @ file:///croot/ipykernel_1705933831282/work
ipython @ file:///croot/ipython_1718287989724/work
ipython-genutils @ file:///tmp/build/80754af9/ipython_genutils_1606773439826/work
ipywidgets @ file:///croot/ipywidgets_1710961502038/work
isort @ file:///croot/isort_1718289883491/work
itemadapter @ file:///tmp/build/80754af9/itemadapter_1626442940632/work
itemloaders @ file:///croot/itemloaders_1708639918324/work
itsdangerous @ file:///croot/itsdangerous_1716533337253/work
jaraco.classes @ file:///tmp/build/80754af9/jaraco.classes_1620983179379/work
jedi @ file:///work/perseverance-python-buildout/croot/jedi_1698864582475/work
jeepney @ file:///tmp/build/80754af9/jeepney_1627537048313/work
jellyfish @ file:///work/perseverance-python-buildout/croot/jellyfish_1698888203179/work
Jinja2 @ file:///croot/jinja2_1716993405101/work
jmespath @ file:///work/perseverance-python-buildout/croot/jmespath_1701734034266/work
joblib @ file:///croot/joblib_1718217211762/work
json5 @ file:///tmp/build/80754af9/json5_1624432770122/work
jsonpatch @ file:///croot/jsonpatch_1714483231291/work
jsonpointer==2.1
jsonschema @ file:///work/perseverance-python-buildout/croot/jsonschema_1701731705290/work
jsonschema-specifications @ file:///work/perseverance-python-buildout/croot/jsonschema-specifications_1701731664072/work
jupyter @ file:///croot/jupyter_1709837193779/work
jupyter-console @ file:///work/perseverance-python-buildout/croot/jupyter_console_1707343324117/work
jupyter-events @ file:///croot/jupyter_events_1718738097486/work
jupyter-lsp @ file:///work/perseverance-python-buildout/croot/jupyter-lsp-meta_1707343107456/work
jupyter_client @ file:///work/perseverance-python-buildout/croot/jupyter_client_1701731788627/work
jupyter_core @ file:///croot/jupyter_core_1718818295206/work
jupyter_server @ file:///croot/jupyter_server_1718827083372/work
jupyter_server_terminals @ file:///work/perseverance-python-buildout/croot/jupyter_server_terminals_1701732095362/work
jupyterlab @ file:///work/perseverance-python-buildout/croot/jupyterlab_1709060583522/work
jupyterlab-pygments @ file:///tmp/build/80754af9/jupyterlab_pygments_1601490720602/work
jupyterlab-widgets @ file:///tmp/build/80754af9/jupyterlab_widgets_1609884341231/work
jupyterlab_server @ file:///work/perseverance-python-buildout/croot/jupyterlab_server_1707343148056/work
keyring @ file:///croot/keyring_1709632513808/work
kiwisolver @ file:///work/perseverance-python-buildout/croot/kiwisolver_1698847502605/work
lazy-object-proxy @ file:///croot/lazy-object-proxy_1712908712693/work
lazy_loader @ file:///croot/lazy_loader_1718176737906/work
lckr_jupyterlab_variableinspector @ file:///work/perseverance-python-buildout/croot/jupyterlab-variableinspector_1709066217947/work
libarchive-c @ file:///tmp/build/80754af9/python-libarchive-c_1617780486945/work
libmambapy @ file:///croot/mamba-split_1714483352891/work/libmambapy
linkify-it-py @ file:///work/perseverance-python-buildout/croot/linkify-it-py_1698889355052/work
llvmlite @ file:///croot/llvmlite_1706910704562/work
lmdb @ file:///work/perseverance-python-buildout/croot/python-lmdb_1698865572354/work
locket @ file:///work/perseverance-python-buildout/croot/locket_1698847555568/work
lxml @ file:///croot/lxml_1715289006750/work
lz4 @ file:///work/perseverance-python-buildout/croot/lz4_1698895161487/work
Markdown @ file:///work/perseverance-python-buildout/croot/markdown_1698870813381/work
markdown-it-py @ file:///work/perseverance-python-buildout/croot/markdown-it-py_1698846045803/work
MarkupSafe @ file:///work/perseverance-python-buildout/croot/markupsafe_1707342994941/work
matplotlib @ file:///croot/matplotlib-suite_1713336378214/work
matplotlib-inline @ file:///work/perseverance-python-buildout/croot/matplotlib-inline_1698864771271/work
mccabe @ file:///opt/conda/conda-bld/mccabe_1644221741721/work
mdit-py-plugins @ file:///work/perseverance-python-buildout/croot/mdit-py-plugins_1698895393342/work
mdurl @ file:///work/perseverance-python-buildout/croot/mdurl_1698845653285/work
menuinst @ file:///croot/menuinst_1718132535249/work
mistune @ file:///work/perseverance-python-buildout/croot/mistune_1698895588147/work
mkl-fft @ file:///work/perseverance-python-buildout/croot/mkl_fft_1698845673361/work
mkl-random @ file:///work/perseverance-python-buildout/croot/mkl_random_1698845720894/work
mkl-service==2.4.0
more-itertools @ file:///work/perseverance-python-buildout/croot/more-itertools_1701747136585/work
mpmath @ file:///work/perseverance-python-buildout/croot/mpmath_1698864994882/work
msgpack @ file:///work/perseverance-python-buildout/croot/msgpack-python_1698846122235/work
multidict @ file:///croot/multidict_1701096859099/work
multipledispatch @ file:///work/perseverance-python-buildout/croot/multipledispatch_1698846147141/work
mypy @ file:///croot/mypy-split_1718008458424/work
mypy-extensions @ file:///work/perseverance-python-buildout/croot/mypy_extensions_1698863276135/work
navigator-updater @ file:///croot/navigator-updater_1718030385782/work
nbclient @ file:///work/perseverance-python-buildout/croot/nbclient_1701731912506/work
nbconvert @ file:///work/perseverance-python-buildout/croot/nbconvert_1701732177957/work
nbformat @ file:///work/perseverance-python-buildout/croot/nbformat_1701731830074/work
nc-time-axis==1.4.1
nest-asyncio @ file:///croot/nest-asyncio_1708532673751/work
netCDF4==1.7.1.post2
networkx @ file:///croot/networkx_1717597493534/work
nltk @ file:///work/perseverance-python-buildout/croot/nltk_1698870894961/work
notebook @ file:///io/croot/notebook_1709244541687/work
notebook_shim @ file:///work/perseverance-python-buildout/croot/notebook-shim_1707343174853/work
numba @ file:///croot/numba_1711986258266/work
numbagg==0.8.2
numcodecs==0.13.0
numexpr @ file:///work/perseverance-python-buildout/croot/numexpr_1698871031164/work
numpy @ file:///croot/numpy_and_numpy_base_1708638617955/work/dist/numpy-1.26.4-cp312-cp312-linux_x86_64.whl#sha256=1d700f51d8b4fa684d858c9e3b56b1656bc5c82b6b79ff08d4e3b491c430059f
numpy-groupies==0.11.2
numpydoc @ file:///croot/numpydoc_1718279157024/work
nvidia-cublas-cu12==12.4.5.8
nvidia-cuda-cupti-cu12==12.4.127
nvidia-cuda-nvrtc-cu12==12.4.127
nvidia-cuda-runtime-cu12==12.4.127
nvidia-cudnn-cu12==9.1.0.70
nvidia-cufft-cu12==11.2.1.3
nvidia-curand-cu12==10.3.5.147
nvidia-cusolver-cu12==11.6.1.9
nvidia-cusparse-cu12==12.3.1.170
nvidia-nccl-cu12==2.21.5
nvidia-nvjitlink-cu12==12.4.127
nvidia-nvtx-cu12==12.4.127
openpyxl @ file:///croot/openpyxl_1714158863747/work
overrides @ file:///work/perseverance-python-buildout/croot/overrides_1701732220415/work
packaging @ file:///croot/packaging_1710807400464/work
pandas @ file:///croot/pandas_1718308974269/work/dist/pandas-2.2.2-cp312-cp312-linux_x86_64.whl#sha256=92c518f7e09edd50b5caa5862636c51d6a29391803f3ada62f68aa52f27d8f92
pandocfilters @ file:///opt/conda/conda-bld/pandocfilters_1643405455980/work
panel @ file:///croot/panel_1718119034739/work
param @ file:///croot/param_1711136841944/work
parsel @ file:///croot/parsel_1707503445438/work
parso @ file:///opt/conda/conda-bld/parso_1641458642106/work
partd @ file:///work/perseverance-python-buildout/croot/partd_1698863295708/work
pathspec @ file:///work/perseverance-python-buildout/croot/pathspec_1698805478393/work
patsy @ file:///croot/patsy_1718378176128/work
pexpect @ file:///tmp/build/80754af9/pexpect_1605563209008/work
pickleshare @ file:///tmp/build/80754af9/pickleshare_1606932040724/work
pillow @ file:///croot/pillow_1714398848491/work
pkce @ file:///work/perseverance-python-buildout/croot/pkce_1698863336124/work
pkginfo @ file:///croot/pkginfo_1715695984887/work
platformdirs @ file:///work/perseverance-python-buildout/croot/platformdirs_1701732573265/work
plotly @ file:///croot/plotly_1718136942809/work
pluggy @ file:///work/perseverance-python-buildout/croot/pluggy_1698805497733/work
ply @ file:///work/perseverance-python-buildout/croot/ply_1698846188635/work
pooch==1.8.2
prometheus-client @ file:///work/perseverance-python-buildout/croot/prometheus_client_1698865353814/work
prompt-toolkit @ file:///croot/prompt-toolkit_1704404351921/work
Protego @ file:///tmp/build/80754af9/protego_1598657180827/work
protobuf==3.20.3
psutil @ file:///work/perseverance-python-buildout/croot/psutil_1698863411559/work
ptyprocess @ file:///tmp/build/80754af9/ptyprocess_1609355006118/work/dist/ptyprocess-0.7.0-py2.py3-none-any.whl
pure-eval @ file:///opt/conda/conda-bld/pure_eval_1646925070566/work
py-cpuinfo @ file:///work/perseverance-python-buildout/croot/py-cpuinfo_1698865397566/work
pyarrow @ file:///croot/pyarrow_1707330824290/work/python
pyarrow-hotfix==0.6
pyasn1 @ file:///Users/ktietz/demo/mc3/conda-bld/pyasn1_1629708007385/work
pyasn1-modules==0.2.8
pycodestyle @ file:///croot/pycodestyle_1701910164456/work
pycosat @ file:///croot/pycosat_1714510623388/work
pycparser @ file:///tmp/build/80754af9/pycparser_1636541352034/work
pyct @ file:///work/perseverance-python-buildout/croot/pyct_1698871492339/work
pycurl @ file:///work/perseverance-python-buildout/croot/pycurl_1698898006285/work
pydantic @ file:///croot/pydantic_1709577974639/work
pydantic_core @ file:///croot/pydantic-core_1709573961665/work
pydeck @ file:///croot/pydeck_1706194064552/work
PyDispatcher @ file:///work/perseverance-python-buildout/croot/pydispatcher_1698871512960/work
pydocstyle @ file:///work/perseverance-python-buildout/croot/pydocstyle_1698865417470/work
pyerfa @ file:///croot/pyerfa_1717700762709/work
-e git+http://github.com/OpenFAST/python-toolbox@09596209e4160c44d43ec1b702994ca1820ebae1#egg=pyFAST
pyflakes @ file:///croot/pyflakes_1708962956225/work
Pygments @ file:///work/perseverance-python-buildout/croot/pygments_1698846270603/work
PyJWT @ file:///croot/pyjwt_1715094751094/work
pylint @ file:///work/perseverance-python-buildout/croot/pylint_1698865458204/work
pylint-venv @ file:///croot/pylint-venv_1709837612550/work
pyls-spyder==0.4.0
pyodbc @ file:///croot/pyodbc_1705431351638/work
pyOpenSSL @ file:///croot/pyopenssl_1708386183891/work
pyparsing @ file:///work/perseverance-python-buildout/croot/pyparsing_1698847881454/work
pyproj==3.7.0
PyQt5==5.15.10
PyQt5-sip @ file:///work/perseverance-python-buildout/croot/pyqt-split_1698847927472/work/pyqt_sip
PyQtWebEngine==5.15.6
PySocks @ file:///work/perseverance-python-buildout/croot/pysocks_1698845478203/work
pytest @ file:///croot/pytest_1717793244625/work
python-dateutil @ file:///croot/python-dateutil_1716495738603/work
python-dotenv @ file:///work/perseverance-python-buildout/croot/python-dotenv_1698846722824/work
python-json-logger @ file:///work/perseverance-python-buildout/croot/python-json-logger_1698873656334/work
python-lsp-black @ file:///croot/python-lsp-black_1709232897954/work
python-lsp-jsonrpc @ file:///croot/python-lsp-jsonrpc_1708962872556/work
python-lsp-server @ file:///croot/python-lsp-server_1708971736441/work
python-slugify @ file:///tmp/build/80754af9/python-slugify_1620405669636/work
python-snappy @ file:///work/perseverance-python-buildout/croot/python-snappy_1698900252503/work
pytoolconfig @ file:///croot/pytoolconfig_1701728692402/work
pytz @ file:///croot/pytz_1713974312559/work
pyviz_comms @ file:///croot/pyviz_comms_1711136832766/work
pywavelets @ file:///croot/pywavelets_1705049820073/work
pyxdg @ file:///tmp/build/80754af9/pyxdg_1603822279816/work
PyYAML @ file:///work/perseverance-python-buildout/croot/pyyaml_1698849903511/work
pyzmq @ file:///croot/pyzmq_1705605076900/work
QDarkStyle @ file:///croot/qdarkstyle_1709231003551/work
qstylizer @ file:///work/perseverance-python-buildout/croot/qstylizer_1698900556042/work/dist/qstylizer-0.2.2-py2.py3-none-any.whl#sha256=6d40f32007acd9673a44b9d879388ca3e09a68f0342daa31cab082bc596303c8
QtAwesome @ file:///work/perseverance-python-buildout/croot/qtawesome_1698900594447/work
qtconsole @ file:///croot/qtconsole_1709231153903/work
QtPy @ file:///work/perseverance-python-buildout/croot/qtpy_1701733558435/work
queuelib @ file:///work/perseverance-python-buildout/croot/queuelib_1698873899800/work
rasterio==1.4.1
referencing @ file:///work/perseverance-python-buildout/croot/referencing_1701731622327/work
regex @ file:///work/perseverance-python-buildout/croot/regex_1698863574928/work
requests @ file:///croot/requests_1716902831423/work
requests-file @ file:///Users/ktietz/demo/mc3/conda-bld/requests-file_1629455781986/work
requests-toolbelt @ file:///work/perseverance-python-buildout/croot/requests-toolbelt_1698846872000/work
rfc3339-validator @ file:///work/perseverance-python-buildout/croot/rfc3339-validator_1698873973408/work
rfc3986-validator @ file:///work/perseverance-python-buildout/croot/rfc3986-validator_1698874010100/work
rich @ file:///work/perseverance-python-buildout/croot/rich_1698846912932/work
rioxarray==0.17.0
rope @ file:///croot/rope_1708963153974/work
rpds-py @ file:///work/perseverance-python-buildout/croot/rpds-py_1698982686240/work
Rtree @ file:///work/perseverance-python-buildout/croot/rtree_1698900851071/work
ruamel-yaml-conda @ file:///work/perseverance-python-buildout/croot/ruamel_yaml_1698874047131/work
ruamel.yaml @ file:///work/perseverance-python-buildout/croot/ruamel.yaml_1698863605521/work
s3fs @ file:///croot/s3fs_1714466037529/work
scikit-image @ file:///croot/scikit-image_1718285223463/work
scikit-learn @ file:///croot/scikit-learn_1714164736727/work
scipy @ file:///croot/scipy_1717521478074/work/dist/scipy-1.13.1-cp312-cp312-linux_x86_64.whl#sha256=a60c1bdf983ee297b25e41830d00cc2af50f00bcb47e55578703fef1594ec56e
Scrapy @ file:///croot/scrapy_1708714682090/work
seaborn @ file:///croot/seaborn_1718302919398/work
SecretStorage @ file:///work/perseverance-python-buildout/croot/secretstorage_1698846935813/work
semver @ file:///croot/semver_1709243621175/work
Send2Trash @ file:///work/perseverance-python-buildout/croot/send2trash_1701732325573/work
service-identity @ file:///Users/ktietz/demo/mc3/conda-bld/service_identity_1629460757137/work
setuptools==69.5.1
sip @ file:///work/perseverance-python-buildout/croot/sip_1698846977033/work
six @ file:///tmp/build/80754af9/six_1644875935023/work
smart-open @ file:///work/perseverance-python-buildout/croot/smart_open_1698863627550/work
smmap @ file:///tmp/build/80754af9/smmap_1611694433573/work
sniffio @ file:///croot/sniffio_1705431295498/work
snowballstemmer @ file:///tmp/build/80754af9/snowballstemmer_1637937080595/work
sortedcontainers @ file:///tmp/build/80754af9/sortedcontainers_1623949099177/work
soupsieve @ file:///work/perseverance-python-buildout/croot/soupsieve_1698866207280/work
Sphinx @ file:///croot/sphinx_1718275386998/work
sphinxcontrib-applehelp @ file:///home/ktietz/src/ci/sphinxcontrib-applehelp_1611920841464/work
sphinxcontrib-devhelp @ file:///home/ktietz/src/ci/sphinxcontrib-devhelp_1611920923094/work
sphinxcontrib-htmlhelp @ file:///tmp/build/80754af9/sphinxcontrib-htmlhelp_1623945626792/work
sphinxcontrib-jsmath @ file:///home/ktietz/src/ci/sphinxcontrib-jsmath_1611920942228/work
sphinxcontrib-qthelp @ file:///home/ktietz/src/ci/sphinxcontrib-qthelp_1611921055322/work
sphinxcontrib-serializinghtml @ file:///croot/sphinxcontrib-serializinghtml_1718201486943/work
spyder @ file:///croot/spyder_1710523693130/work
spyder-kernels @ file:///croot/spyder-kernels_1707937713588/work
SQLAlchemy @ file:///croot/sqlalchemy_1718378176902/work
stack-data @ file:///opt/conda/conda-bld/stack_data_1646927590127/work
statsmodels @ file:///croot/statsmodels_1718381181899/work
streamlit @ file:///croot/streamlit_1710262506170/work
sympy==1.13.1
tables @ file:///croot/pytables_1705614842881/work
tabulate @ file:///work/perseverance-python-buildout/croot/tabulate_1701748610754/work
tblib @ file:///Users/ktietz/demo/mc3/conda-bld/tblib_1629402031467/work
tenacity @ file:///work/perseverance-python-buildout/croot/tenacity_1698863691366/work
terminado @ file:///work/perseverance-python-buildout/croot/terminado_1698874934826/work
text-unidecode @ file:///Users/ktietz/demo/mc3/conda-bld/text-unidecode_1629401354553/work
textdistance @ file:///tmp/build/80754af9/textdistance_1612461398012/work
threadpoolctl @ file:///Users/ktietz/demo/mc3/conda-bld/threadpoolctl_1629802263681/work
three-merge @ file:///tmp/build/80754af9/three-merge_1607553261110/work
tifffile @ file:///work/perseverance-python-buildout/croot/tifffile_1698866341575/work
tinycss2 @ file:///work/perseverance-python-buildout/croot/tinycss2_1698875051986/work
tldextract @ file:///opt/conda/conda-bld/tldextract_1646638314385/work
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
tomli @ file:///work/perseverance-python-buildout/croot/tomli_1698805555176/work
tomlkit @ file:///work/perseverance-python-buildout/croot/tomlkit_1698847005123/work
toolz @ file:///work/perseverance-python-buildout/croot/toolz_1698846367152/work
torch==2.5.1
torchvision==0.20.1
tornado @ file:///croot/tornado_1718740109488/work
tqdm @ file:///croot/tqdm_1716395931952/work
traitlets @ file:///croot/traitlets_1718227057033/work
triton==3.1.0
truststore @ file:///work/perseverance-python-buildout/croot/truststore_1701735771625/work
Twisted @ file:///croot/twisted_1708702809815/work
typing_extensions @ file:///croot/typing_extensions_1715268824938/work
tzdata @ file:///croot/python-tzdata_1690578112552/work
uc-micro-py @ file:///work/perseverance-python-buildout/croot/uc-micro-py_1698875458503/work
ujson @ file:///croot/ujson_1717597529825/work
unicodedata2 @ file:///croot/unicodedata2_1713212950228/work
Unidecode @ file:///tmp/build/80754af9/unidecode_1614712377438/work
urllib3 @ file:///croot/urllib3_1718912636303/work
vtk==9.3.1
w2w==0.1.0
w3lib @ file:///Users/ktietz/demo/mc3/conda-bld/w3lib_1629359764703/work
watchdog @ file:///croot/watchdog_1717166512516/work
wcwidth @ file:///Users/ktietz/demo/mc3/conda-bld/wcwidth_1629357192024/work
webencodings @ file:///work/perseverance-python-buildout/croot/webencodings_1698866454420/work
websocket-client @ file:///croot/websocket-client_1715878298792/work
Werkzeug @ file:///croot/werkzeug_1716533313107/work
whatthepatch @ file:///work/perseverance-python-buildout/croot/whatthepatch_1698866493803/work
wheel==0.43.0
widgetsnbextension @ file:///croot/widgetsnbextension_1710960037596/work
wrapt @ file:///work/perseverance-python-buildout/croot/wrapt_1698851040578/work
wurlitzer @ file:///work/perseverance-python-buildout/croot/wurlitzer_1698906481008/work
xarray @ file:///work/perseverance-python-buildout/croot/xarray_1698851063735/work
xyzservices @ file:///work/perseverance-python-buildout/croot/xyzservices_1698866513675/work
yapf @ file:///croot/yapf_1708964320665/work
yarl @ file:///croot/yarl_1701105127787/work
zarr==2.18.3
zict @ file:///work/perseverance-python-buildout/croot/zict_1698876364831/work
zipp @ file:///work/perseverance-python-buildout/croot/zipp_1707367835224/work
zope.interface @ file:///work/perseverance-python-buildout/croot/zope.interface_1698866533743/work
zstandard @ file:///croot/zstandard_1714677652653/work
Traceback
I installed the new version of w2w using pip, but it showed some errors, such as one about xarray. Therefore, I replaced the new w2w.py with the old-version w2w.py. However, it runs successfully on another HPC.
I did everything the same as before and the only difference is a different HPC, which may mean a different system environment.
Mhm, as you suggest here, the issue is likely in your HPC setup.
It's hard or even impossible to diagnose what's wrong.
You will probably have to go through the pain of tediously comparing both setups. I would suggest creating a setup script while doing this so you can reproduce that easily.
Since there is no error coming from w2w, I'm not sure how I could help here...
Thank you so much for your reply. I can successfully run real.exe and wrf.exe with MODIS landuse data before running w2w. I am wondering what could change before and after w2w that is related to the system environment, because the geo_em file type should be the same after using w2w.
I think I found the problem: my WPS and WRF generate files in the 64-bit offset netCDF format, but w2w generates netCDF-4. I am looking for a way to solve it. Can I simply convert the data format, or do I have to recompile WPS and WRF?
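For what it's worth, a format conversion is usually enough; recompiling WPS/WRF should not be necessary. Below is a minimal, illustrative Python sketch (the helper name and paths are hypothetical, and it assumes the third-party netCDF4 package is installed; note that the classic 64-bit offset format cannot represent netCDF-4-only features such as compression, groups, or 64-bit integer variables):

```python
def convert_to_64bit_offset(src_path, dst_path):
    """Copy a netCDF-4 file into the classic 64-bit offset format.

    Simplified sketch: scalar variables and netCDF-4-only data types
    would need extra handling.
    """
    # Import inside the function so the sketch loads even where the
    # optional netCDF4 dependency is missing.
    from netCDF4 import Dataset

    with Dataset(src_path) as src, \
         Dataset(dst_path, "w", format="NETCDF3_64BIT_OFFSET") as dst:
        # Global attributes.
        dst.setncatts({k: src.getncattr(k) for k in src.ncattrs()})
        # Dimensions (preserve the unlimited dimension, if any).
        for name, dim in src.dimensions.items():
            dst.createDimension(name, None if dim.isunlimited() else len(dim))
        # Variables: _FillValue must be given at creation time, not after.
        for name, var in src.variables.items():
            fill = getattr(var, "_FillValue", None)
            out = dst.createVariable(name, var.datatype, var.dimensions,
                                     fill_value=fill)
            out.setncatts({k: var.getncattr(k) for k in var.ncattrs()
                           if k != "_FillValue"})
            out[:] = var[:]
```

If the netCDF C utilities are available, `nccopy -k 2 geo_em.d01.nc geo_em.d01.classic.nc` should do the same conversion from the command line (kind 2 is the 64-bit offset format).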
Great, thanks for looking into that. That may help others! There was #84 - is that in any way related to your issue?
| gharchive/issue | 2024-11-11T22:39:01 | 2025-04-01T06:44:54.654752 | {
"authors": [
"drq95",
"jkittner"
],
"repo": "matthiasdemuzere/w2w",
"url": "https://github.com/matthiasdemuzere/w2w/issues/137",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
294449993 | parseFloat not sorting as expected if column contains negative values
If I have a column that contains both positive and negative float values, the sort order is off. When the sort order is descending, the negative values precede the zero values. When it's ascending, the zero values precede the negative values, which precede the positive values. I wrote my own parseFloat filter to correct it:
.filter('myParseFloat', function() {
    return function(input) {
        var float = parseFloat(input);
        return isNaN(float) ? null : float;
    };
})
<th ts-criteria="netPrem|myParseFloat">Net Prem</th>
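The underlying pitfall — an unparseable or empty cell producing NaN, which compares as false against everything and scrambles the sort — is easy to reproduce outside Angular. A small Python analogue of the filter idea above (illustrative only, not part of angular-tablesort):

```python
import math

def sort_key(cell):
    """Numeric cells sort numerically; unparseable/empty cells sort last
    instead of poisoning comparisons with NaN (the myParseFloat idea)."""
    try:
        value = float(cell)
    except (TypeError, ValueError):
        value = math.nan
    if math.isnan(value):
        return (True, 0.0)   # group non-numeric cells at the end
    return (False, value)    # negatives < zero < positives, as expected

cells = ["10", "-2.5", "", "0", "abc", "-0.1"]
print(sorted(cells, key=sort_key))
# → ['-2.5', '-0.1', '0', '10', '', 'abc']
```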
Merged PR #106 which should fix this.
| gharchive/issue | 2018-02-05T16:13:50 | 2025-04-01T06:44:54.689175 | {
"authors": [
"mattiash",
"shireknight"
],
"repo": "mattiash/angular-tablesort",
"url": "https://github.com/mattiash/angular-tablesort/issues/105",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
454061169 | What's the deal with the last release?
I am a bit confused about the latest version 1.10. This version has the same version number as the previous release and is therefore not loaded, at least with go modules. To work around this problem I deleted my local version and reloaded it with go get -u github.com/mattn/go-sqlite3. Afterwards, I had the latest 1.10 version. This versioning can lead to strange behavior, since two developers on a project use the same version number of go-sqlite3 but a different code base.
Furthermore, it says in the Changelog that SQLite version 3.28 is used now, but in the file sqlite3-binding.h it says 3.25.2. Also, here on GitHub, in the source code of the release and in the corresponding branch, version 3.25.2 is still stored.
Did I do something wrong or did I get it wrong?
Thank you for your work and in advance for the answer.
What do you mean same version? go-sqlite3 does not have go.mod.
It is not necessary that go-sqlite has a go.mod. If a project has a go.mod and this project has go-sqlite as a dependency, the last tagged version of go-sqlite is automatically used. The last tagged version of go-sqlite was 1.10. A few days ago you created a new release of go-sqlite which has the version number 1.10 again.
Example: if developer A loaded go-sqlite3 a month ago, the old version 1.10 was loaded automatically. Developer B starts development today, checks out the current project, and loads the go.mod dependencies. In the go.mod, version 1.10 of go-sqlite3 is defined, which is then loaded from GitHub. This version is newer than developer A's. So developers A and B have the same version number (1.10) but different versions.
Or are there no differences between the old and the new 1.10 version?
If go-sqlite3 does not have a go.mod, Go uses the older commit at the same version.
The commits pushed into the master branch after 1.10.0 are not used until I tag the next version (if you use tag v1.10.0). If you want to follow the latest version, you should use @latest.
If go-sqlite3 does not have a go.mod, Go uses the older commit at the same version.
I think it works differently with modules. Do you think you could increase the version number for future releases? :blush: This would make it easier to use the latest version with a versioning system (like dep, modules, etc.).
The commits pushed into the master branch after 1.10.0 are not used until I tag the next version.
That makes sense. You wrote in the release notes that SQLite 3.28.0 is now being used, but the commit for the change was added after the 1.10 tag. Comparison of v1.10 and master
Is master stable enough for production use?
Thanks for your time.
Yes, master is stable enough for production use.
| gharchive/issue | 2019-06-10T08:43:55 | 2025-04-01T06:44:54.761352 | {
"authors": [
"GJRTimmer",
"Vorlif",
"mattn"
],
"repo": "mattn/go-sqlite3",
"url": "https://github.com/mattn/go-sqlite3/issues/723",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2234490887 | update to latest dev branch and use package manager
Bump Zig support to the latest development release
(0.12.0-dev.3596+fbb38a768). This includes switching to the included package manager.
This needs to be rebased after the following PRs are merged or otherwise resolved:
https://github.com/mattnite/zig-mbedtls/pull/8
https://github.com/mattnite/zig-libssh2/pull/8
https://github.com/mattnite/zig-zlib/pull/14
| gharchive/pull-request | 2024-04-09T23:49:58 | 2025-04-01T06:44:54.764441 | {
"authors": [
"tobiaskohlbau"
],
"repo": "mattnite/zig-libgit2",
"url": "https://github.com/mattnite/zig-libgit2/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
899891021 | Please add links to places where people can find experienced help for customization of their template
I really suck at customizing these go template fragment files, and would rather have someone else come up with a more professional looking design. I wonder where all of you get help? Surely you're not all web designers?
I've realized I have bigger problems: it's not just upgrading Castanet that breaks everything, upgrading Hugo (on its own) breaks everything too. So I would have to first find a Hugo specialist to upgrade Hugo to the latest version, then find a Castanet specialist to upgrade the Castanet template.
So some basic info on where to hire Hugo people, and template customization people (for Castanet), would help.
At this stage, I've come to terms with leaving everything as is, and am considering switching to something else entirely; it's a lot more work to maintain than I'm comfortable with. But knowing where help is available might make others feel more confident in choosing Hugo + Castanet.
| gharchive/issue | 2021-05-24T18:18:13 | 2025-04-01T06:44:54.785592 | {
"authors": [
"lestephane"
],
"repo": "mattstratton/castanet",
"url": "https://github.com/mattstratton/castanet/issues/366",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
963321696 | Stop stealing from the dead and disable steal if no one to steal from
Summary
Previously you could steal from players who were not alive. In addition, you could still choose to steal even if there were no options to steal. This work only allows stealing from players who are alive and disables the steal button if there's no one to steal from.
Changes
ActionButtons.tsx - Takes in the list of other players and checks to see if anyone can be stolen from. If not, the steal button is disabled.
Actions.tsx - Pass other players to ActionButtons and add alive check to steal verification function.
Stop Captain from stealing from dead people and disable steal button if no one can be stolen from
| gharchive/pull-request | 2021-08-07T23:45:00 | 2025-04-01T06:44:54.788143 | {
"authors": [
"mattwells19"
],
"repo": "mattwells19/Cooped-Up",
"url": "https://github.com/mattwells19/Cooped-Up/pull/38",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Dear include guard...
Issue number
close #29
Changes
Work!
It probably works.
Shall we just merge it?
| gharchive/pull-request | 2021-11-13T19:30:25 | 2025-04-01T06:44:54.789730 | {
"authors": [
"matumoto1234"
],
"repo": "matumoto1234/library",
"url": "https://github.com/matumoto1234/library/pull/30",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
318935902 | move cert check to dispatcher
Move the cert check to the dispatcher to avoid checking the cert again for every batch.
ping @matzew @ziccardi
Tested with an outdated cert and got this which should be fine:
15:37:34,146 INFO [org.jboss.aerogear.unifiedpush.message.NotificationDispatcher] (Thread-1 (ActiveMQ-client-global-threads)) Received UnifiedPushMessage from JMS queue, will now trigger the Push Notification delivery for the %s variant (ios)
15:37:34,151 ERROR [org.jboss.aerogear.unifiedpush.message.sender.apns.ApnsUtil] (Thread-1 (ActiveMQ-client-global-threads)) Provided APNs .p12 file is expired or not yet valid
15:37:34,151 ERROR [org.jboss.aerogear.unifiedpush.message.NotificationDispatcher] (Thread-1 (ActiveMQ-client-global-threads)) Provided variant has invalid credentials. No push messages are sent to network
| gharchive/pull-request | 2018-04-30T15:40:05 | 2025-04-01T06:44:54.793595 | {
"authors": [
"pb82"
],
"repo": "matzew/aerogear-unifiedpush-server",
"url": "https://github.com/matzew/aerogear-unifiedpush-server/pull/16",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1663512165 | 🛑 VQ PROD SV is down
In 63a07a3, VQ PROD SV (http://demolatam.sidesys.com.ar/VirtualQueue_SV/index.html) was down:
HTTP code: 500
Response time: 15992 ms
Resolved: VQ PROD SV is back up in ed151c7.
| gharchive/issue | 2023-04-12T01:35:12 | 2025-04-01T06:44:54.798348 | {
"authors": [
"mauribesson"
],
"repo": "mauribesson/upptime",
"url": "https://github.com/mauribesson/upptime/issues/1241",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1174231985 | 🛑 VQ PROD AR is down
In 1ce43aa, VQ PROD AR (http://demoarg.sidesys.com.ar/virtualqueue/index.html) was down:
HTTP code: 0
Response time: 0 ms
Resolved: VQ PROD AR is back up in 3c4d8b8.
| gharchive/issue | 2022-03-19T10:25:55 | 2025-04-01T06:44:54.801063 | {
"authors": [
"mauribesson"
],
"repo": "mauribesson/upptime",
"url": "https://github.com/mauribesson/upptime/issues/255",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1158545396 | Missing some Lenovo Drivers for Windows 10 21H2
Hey Guys,
since we want to change our SCCM deployment to Win10 21H2, I want to upgrade our Driver Packages in SCCM.
Some drivers are missing, like L15, E15, P51s or T570.
I use the Driver Automation Tool Version 7.0.1.
Can someone help me or has the same problem?
Thank you.
Yes. I have the same problem. I think Lenovo must have changed the xml file.
I was in version 7.0.4. I switched back to version 6.5.6, it works.
Resolved by Jabbaxx
https://github.com/maurice-daly/DriverAutomationTool/issues/308
| gharchive/issue | 2022-03-03T15:21:41 | 2025-04-01T06:44:54.803491 | {
"authors": [
"azulol",
"fredericlassalle"
],
"repo": "maurice-daly/DriverAutomationTool",
"url": "https://github.com/maurice-daly/DriverAutomationTool/issues/314",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
608455906 | Now, the calls to method reference are counted in the RFC metrics
I disabled one test method because it doesn't work well.
(@mauricioaniche) wrote in README.md that RFC fails when a method has overloads with the same number of parameters but different types.
So I disabled this test case; it should be 2, not 1, but I intend to improve the RFC metric.
Hm, something went wrong
@maykon-oliveira shall I close this one and we reopen when you get back to this ?
Yes, ok
| gharchive/pull-request | 2020-04-28T16:34:36 | 2025-04-01T06:44:54.805513 | {
"authors": [
"mauricioaniche",
"maykon-oliveira"
],
"repo": "mauricioaniche/ck",
"url": "https://github.com/mauricioaniche/ck/pull/58",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1379193090 | Animation stutters when the parent container uses transform: scale
The PC's screen resolution is not high enough, so transform: scale is used to shrink the parent container; when moving elements inside the canvas, the animation is not smooth.
The code is as follows:
<template>
<div style="user-select:none" ref="winMainRef">
<div :style="'width:'+canva.width*canva.scale+'px;height:'+canva.height*canva.scale+'px;overflow: hidden; border: 1px solid #ccc;'">
<div class="win-content p-r"
:style="'width:'+canva.width+'px;height:'+canva.height+'px;transform: scale('+ canva.scale +');'">
<VueDraggableResizable :parent="true" :w="dom.w" :h="dom.h" :x="dom.x" :y="dom.y" class="coverage-video">
<p>你可以拖着我,按照自己的意愿调整大小。</p>
</VueDraggableResizable>
</div>
</div>
</div>
</template>
<script>
import VueDraggableResizable from 'vue-draggable-resizable'
import 'vue-draggable-resizable/dist/VueDraggableResizable.css'
export default {
name: "HelloWorld",
components:{
VueDraggableResizable
},
data() {
return {
canva: {
width: 5760,
height: 1080,
scale: 0.3125,
},
dom:{
x:0,
y:0,
w:1024,
h:768
}
};
},
mounted() {
this.windowResize()
window.onresize = () => {
return (() => {
this.windowResize()
})();
};
},
methods: {
windowResize :function(){
let winMainRef = this.$refs.winMainRef;
const _winMainRefW = winMainRef.clientWidth;
let winScale = (_winMainRefW - 120) / this.canva.width;
this.canva.scale = winScale
},
},
};
</script>
<style>
.p-r {
position: relative;
}
.win-content {
background-color: #e4e4e4;
transform-origin: left top;
}
.coverage-video {
background-color: rgb(0, 0, 0);
border-width: 6.46102px;
border-color: #0ff;
}
</style>
It's not stuttering, it's an offset error; the drag offset needs to be adjusted when scaling is used.
It's not stuttering, it's an offset error; the drag offset needs to be adjusted when scaling is used.
What do you mean? Should the changed values be scaled back up in onResize and onDrag?
Found a solution: the VueDraggableResizable component has a scaling parameter, scale.
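For anyone landing here, a minimal sketch of the coordinate math behind that scale prop. The helper name below is ours, purely for illustration; passing `:scale` to vue-draggable-resizable achieves the same correction internally.

```javascript
// Pointer deltas arrive in screen pixels; with the parent shrunk by
// `transform: scale(s)` they must be divided by s before being applied
// to the unscaled canvas, or every drag under- or overshoots.
function toCanvasDelta(screenDelta, scale) {
  if (!(scale > 0)) throw new Error("scale must be positive");
  return screenDelta / scale;
}

// With the canvas from this issue (scale = 0.3125), a 100 px mouse
// move maps to a 320 px move inside the 5760x1080 canvas.
console.log(toCanvasDelta(100, 0.3125)); // 320
```

In the template that translates to something like `<VueDraggableResizable :scale="canva.scale" ...>`.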
| gharchive/issue | 2022-09-20T10:35:55 | 2025-04-01T06:44:54.810644 | {
"authors": [
"Angle-bo",
"ents1008"
],
"repo": "mauricius/vue-draggable-resizable",
"url": "https://github.com/mauricius/vue-draggable-resizable/issues/359",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
516794351 | consider automating contributors/authors
Automate adding contributors to AUTHORS.md
@allcontributors has a bot to Recognize All Contributors - including those that don't push code
https://allcontributors.org/
Bot Configuration
{
"files": ["AUTHORS.md"],
"imageSize": 100,
"contributorsPerLine": 7,
"badgeTemplate": "[](#contributors)",
"contributorTemplate": "<%= avatarBlock %><br /><%= contributions %>",
"types": {
"custom": {
"symbol": "🦄",
"description": "Forum contributions.",
"link": "[<%= symbol %>](<%= url %> \"<%= description %>\"),"
}
},
"skipCi": "true",
"contributors": []
}
@npracht are you 🔢 Dr Who?
It says you'll be adding in 1 hour 😁
I know... That's impressive xD
This is really nice! I don't think we're using that file but we could definitely look to incorporate it - let me check in with the product folk!
| gharchive/issue | 2019-11-03T07:58:54 | 2025-04-01T06:44:54.822072 | {
"authors": [
"RCheesley",
"npracht",
"virgilwashere"
],
"repo": "mautic/documentation",
"url": "https://github.com/mautic/documentation/issues/362",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
910412152 | Mautic 3 missing docs
We seem to have a very large 'todo' in the docs about how to update to Mautic 3 - this should address that in a basic sense.
It could be padded out with screenshots but I have not got the time to do that at present, so if someone would like to take that on please do!
@RCheesley looks good to me
| gharchive/pull-request | 2021-06-03T11:45:16 | 2025-04-01T06:44:54.823288 | {
"authors": [
"RCheesley",
"fakela"
],
"repo": "mautic/mautic-documentation",
"url": "https://github.com/mautic/mautic-documentation/pull/214",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
509666927 | ibm data and resources for provider version 0.17.6
I've generated the json data for the IBM Cloud provider v0.17.6 (https://ibm-cloud.github.io/tf-ibm-docs/index.html) using similar approach as the scraper you used (https://github.com/l2fprod/ibmcloud-terraform-provider-docs-scrape).
Would you consider adding the JSON to the extension?
@mauve
in the meantime - a fork is available there https://marketplace.visualstudio.com/items?itemName=l2fprod.terraform-fork
We just released v2.0.0-rc.1 of the extension. The main features include:
Added syntax support for 0.12
Added terraform-ls usage by default (currently on 0.3.0, which offers basic provider code completion)
You can find additional information and specifics in the release notes and CHANGELOG.
With this release we expect that many of the prior issues and PRs are no longer relevant or have been addressed, and are therefore being closed. If you feel the action taken on an issue or PR is in error, please comment as such and we can figure out the appropriate way to address it.
We plan to add the final 2.0.0 release to the marketplace soon, but are actively seeking your feedback now on the release candidates. You can download the .vsix from the releases page and manually install it in VS Code to try it out.
| gharchive/pull-request | 2019-10-20T23:11:01 | 2025-04-01T06:44:54.883138 | {
"authors": [
"l2fprod",
"paultyng"
],
"repo": "mauve/vscode-terraform",
"url": "https://github.com/mauve/vscode-terraform/pull/234",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
1419023912 | Improve HTML structure
Based on the other dropdown’s HTML structure it looks like every link
should be nested in a list item element.
related: #8ea1f828
What other dropdown's HTML are you talking about Juri?
I think it should be li inside of a instead of a inside of li
Because then clicking the li doesn't redirect.
Also, with this change the horizontal scroll comes up on the drop-down
@max-programming li inside of a is invalid HTML, see https://developer.mozilla.org/en-US/docs/Web/HTML/Element/li
It must be contained in a parent element: an ordered list (<ol>), an unordered list (<ul>), or a menu (<menu>)
Furthermore, nesting li in a resulted in the hydration error fixed by your commit 8ea1f828a779ed777d6ec72e6df62ea34dbd86bf. Nesting li in a is definitely wrong.
What other dropdown's HTML are you talking about Juri?
I was thinking about the language selector just above, but I misread. It also nests all a elements in one single li; in my opinion each should be nested in its own li, since we have a list of items.
Because then clicking the li doesn't redirect.
Not sure what you mean here.
Also, with this change the horizontal scroll comes up on the drop-down
Let me rework this. This wasn’t the intention.
Not sure what you mean here.
I mean if the anchor is nested in the li, only clicking the text will navigate to another page, not the area around the text,
because the a tag only covers the text.
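To make that concrete, a sketch of the structure being converged on here: each link gets its own list item, and the anchor is stretched to fill the item so the whole row stays clickable. Class names and URLs below are illustrative, not the repo's actual ones.

```html
<!-- Each link in its own <li>; the <a> fills the item so clicking
     anywhere in the row navigates. Names are illustrative. -->
<ul class="dropdown-menu">
  <li><a href="/repos/javascript" style="display: block">JavaScript</a></li>
  <li><a href="/repos/python" style="display: block">Python</a></li>
</ul>
```

This keeps the HTML valid (li may only appear inside ul, ol, or menu) while preserving the full click target the a-wraps-li version had.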
| gharchive/pull-request | 2022-10-22T01:45:15 | 2025-04-01T06:44:55.036467 | {
"authors": [
"max-programming",
"ur5us"
],
"repo": "max-programming/hacktoberfest-projects",
"url": "https://github.com/max-programming/hacktoberfest-projects/pull/94",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1520977873 | Broken Channel: Steezy Coffee Shop
Youtube Error 150. Video cannot be played in embedded players.
https://lofimusic.app/steezyasfuck-coffee-show
there is a pull request to update this URL
| gharchive/issue | 2023-01-05T15:59:35 | 2025-04-01T06:44:55.039303 | {
"authors": [
"rafedaniels",
"stuartskelton"
],
"repo": "maxence-charriere/lofimusic",
"url": "https://github.com/maxence-charriere/lofimusic/issues/42",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2655415854 | [Bug]: Frame copy failed on wayland
Steps for reproducing the issue
Just run app
What is the buggy behavior?
Not working due Frame copy failed
What is the expected behavior?
To change backlight
Logs
[2024-11-13T12:56:52Z DEBUG wluma] Using Config {
als: Time {
thresholds: {
9: "dim",
16: "normal",
13: "bright",
18: "dark",
0: "night",
20: "night",
7: "dark",
11: "normal",
},
},
output: [
Backlight(
BacklightOutput {
name: "eDP-1",
path: "/sys/class/backlight/nvidia_0",
capturer: Wayland(
Any,
),
min_brightness: 1,
},
),
Backlight(
BacklightOutput {
name: "keyboard-dell",
path: "/sys/bus/platform/devices/dell-laptop/leds/dell::kbd_backlight",
capturer: None,
min_brightness: 0,
},
),
],
}
[2024-11-13T12:56:52Z DEBUG wluma::brightness::backlight] Using direct write on /sys/class/backlight/nvidia_0 to change brightness value
[2024-11-13T12:56:52Z WARN wluma] Skipping 'keyboard-dell' as it might be disconnected: No such file or directory (os error 2)
[2024-11-13T12:56:52Z INFO wluma] Continue adjusting brightness and wluma will learn your preference over time.
[2024-11-13T12:56:52Z DEBUG wluma::frame::capturer::wayland] Detected support for wlr-screencopy-unstable-v1 protocol
[2024-11-13T12:56:52Z DEBUG wluma::frame::capturer::wayland] Detected support for linux-dmabuf-v1 protocol
[2024-11-13T12:56:52Z DEBUG wluma::frame::capturer::wayland] Using output 'Najing CEC Panda FPD Technology CO. ltd 0x004D (eDP-1)' for config 'eDP-1'
[2024-11-13T12:56:52Z DEBUG wluma::frame::capturer::wayland] Using wlr-screencopy-unstable-v1 protocol to request frames
[2024-11-13T12:56:53Z DEBUG wluma::frame::capturer::wayland] Frame copy failed
[2024-11-13T12:56:54Z DEBUG wluma::frame::capturer::wayland] Frame copy failed
[2024-11-13T12:56:55Z DEBUG wluma::frame::capturer::wayland] Frame copy failed
[2024-11-13T12:56:56Z DEBUG wluma::frame::capturer::wayland] Frame copy failed
[2024-11-13T12:56:57Z DEBUG wluma::frame::capturer::wayland] Frame copy failed
Version
4.5.1-1 from aur
Environment
NAME="Arch Linux"
BUILD_ID=rolling
Linux 6.11.7-zen1-1-zen #1 ZEN SMP PREEMPT_DYNAMIC Fri, 08 Nov 2024 17:57:58 +0000 x86_64 GNU/Linux
clang 18.1.8-4
hyprland-git 0.45.0.r17.bb160cfe-1
lib32-vulkan-icd-loader 1.3.295-1
rustup 1.27.1-1
vulkan-headers 1:1.3.295-1
vulkan-icd-loader 1.3.295-1
xdg-desktop-portal-hyprland 1.3.8-1
WLR_DRM_NO_MODIFIERS=
Hmmm, do you have multiple GPUs? Could you try to force usage of a specific one to see if it helps? Like so https://github.com/maximbaz/wluma/issues/63#issuecomment-2213371571
do you have multiple GPUs?
@maximbaz , I have only one GPU (it is Nvidia)
Unfortunately I'm really not sure how to debug this, compositor does not send us any information about the error, just that it failed :disappointed: We have a few hyprland users, while it's possible that you are the only one on hyprland-git, I suspect the issue might be somewhere with Nvidia, though I cannot imagine where...
Perhaps one thing we could try: could you install vulkan-validation-layers package and then run these commands:
export VK_INSTANCE_LAYERS=VK_LAYER_KHRONOS_validation
export VK_LOADER_DEBUG=all
cargo run
If you see some errors, please share!
Running `target/debug/wluma`
INFO: Vulkan Loader Version 1.3.295
[2024-11-14T13:05:04Z DEBUG wluma] Using Config {
als: Time {
thresholds: {
11: "normal",
9: "dim",
16: "normal",
13: "bright",
20: "night",
7: "dark",
0: "night",
18: "dark",
},
},
output: [
Backlight(
BacklightOutput {
name: "eDP-1",
path: "/sys/class/backlight/nvidia_0",
capturer: Wayland(
Any,
),
min_brightness: 1,
},
),
],
}
[2024-11-14T13:05:04Z DEBUG wluma::brightness::backlight] Using direct write on /sys/class/backlight/nvidia_0 to change brightness value
[2024-11-14T13:05:04Z INFO wluma] Continue adjusting brightness and wluma will learn your preference over time.
[2024-11-14T13:05:04Z DEBUG wluma::frame::capturer::wayland] Detected support for wlr-screencopy-unstable-v1 protocol
[2024-11-14T13:05:04Z DEBUG wluma::frame::capturer::wayland] Detected support for linux-dmabuf-v1 protocol
[2024-11-14T13:05:04Z DEBUG wluma::frame::capturer::wayland] Using output 'Najing CEC Panda FPD Technology CO. ltd 0x004D (eDP-1)' for config 'eDP-1'
[2024-11-14T13:05:04Z DEBUG wluma::frame::capturer::wayland] Using wlr-screencopy-unstable-v1 protocol to request frames
LAYER: Searching for implicit layer manifest files
LAYER: In following locations:
LAYER: /home/minicx/.config/vulkan/implicit_layer.d
LAYER: /etc/xdg/vulkan/implicit_layer.d
LAYER: /etc/vulkan/implicit_layer.d
LAYER: /home/minicx/.local/share/vulkan/implicit_layer.d
LAYER: /usr/local/share/vulkan/implicit_layer.d
LAYER: /usr/share/vulkan/implicit_layer.d
LAYER: Found the following files:
LAYER: /usr/share/vulkan/implicit_layer.d/nvidia_layers.json
INFO: Found manifest file /usr/share/vulkan/implicit_layer.d/nvidia_layers.json (file version 1.0.1)
LAYER: Searching for explicit layer manifest files
LAYER: In following locations:
LAYER: /home/minicx/.config/vulkan/explicit_layer.d
LAYER: /etc/xdg/vulkan/explicit_layer.d
LAYER: /etc/vulkan/explicit_layer.d
LAYER: /home/minicx/.local/share/vulkan/explicit_layer.d
LAYER: /usr/local/share/vulkan/explicit_layer.d
LAYER: /usr/share/vulkan/explicit_layer.d
LAYER: Found no files
DRIVER: Searching for driver manifest files
DRIVER: In following locations:
DRIVER: /home/minicx/.config/vulkan/icd.d
DRIVER: /etc/xdg/vulkan/icd.d
DRIVER: /etc/vulkan/icd.d
DRIVER: /home/minicx/.local/share/vulkan/icd.d
DRIVER: /usr/local/share/vulkan/icd.d
DRIVER: /usr/share/vulkan/icd.d
DRIVER: Found the following files:
DRIVER: /usr/share/vulkan/icd.d/nvidia_icd.json
DRIVER: Found ICD manifest file /usr/share/vulkan/icd.d/nvidia_icd.json, version 1.0.1
DEBUG | DRIVER: Searching for ICD drivers named libGLX_nvidia.so.0
WARNING | LAYER: env var 'VK_INSTANCE_LAYERS' defined and adding layers "VK_LAYER_KHRONOS_validation"
ERROR | LAYER: Layer "VK_LAYER_KHRONOS_validation" was not found but was requested by env var VK_INSTANCE_LAYERS!
WARNING | LAYER: env var 'VK_INSTANCE_LAYERS' defined and adding layers "VK_LAYER_KHRONOS_validation"
ERROR | LAYER: Layer "VK_LAYER_KHRONOS_validation" was not found but was requested by env var VK_INSTANCE_LAYERS!
LAYER: vkCreateInstance layer callstack setup to:
LAYER: <Application>
LAYER: ||
LAYER: <Loader>
LAYER: ||
LAYER: <Drivers>
INFO | DRIVER: linux_read_sorted_physical_devices:
INFO | DRIVER: Original order:
INFO | DRIVER: [0] NVIDIA GeForce RTX 3050 Laptop GPU
INFO | DRIVER: Sorted order:
INFO | DRIVER: [0] NVIDIA GeForce RTX 3050 Laptop GPU
INFO | DRIVER: linux_read_sorted_physical_devices:
INFO | DRIVER: Original order:
INFO | DRIVER: [0] NVIDIA GeForce RTX 3050 Laptop GPU
INFO | DRIVER: Sorted order:
INFO | DRIVER: [0] NVIDIA GeForce RTX 3050 Laptop GPU
DEBUG | DRIVER: Copying old device 0 into new device 0
DRIVER | LAYER: vkCreateDevice layer callstack setup to:
DRIVER | LAYER: <Application>
DRIVER | LAYER: ||
DRIVER | LAYER: <Loader>
DRIVER | LAYER: ||
DRIVER | LAYER: <Device>
DRIVER | LAYER: Using "NVIDIA GeForce RTX 3050 Laptop GPU" with driver: "libGLX_nvidia.so.0"
[2024-11-14T13:05:05Z DEBUG wluma::frame::capturer::wayland] Frame copy failed
[2024-11-14T13:05:06Z DEBUG wluma::frame::capturer::wayland] Frame copy failed
[2024-11-14T13:05:07Z DEBUG wluma::frame::capturer::wayland] Frame copy failed
Unfortunately I'm really not sure how to debug this, compositor does not send us any information about the error, just that it failed 😞 We have a few hyprland users, while it's possible that you are the only one on hyprland-git, I suspect the issue might be somewhere with Nvidia, though I cannot imagine where...
Perhaps one thing we could try: could you install vulkan-validation-layers package and then run these commands:
export VK_INSTANCE_LAYERS=VK_LAYER_KHRONOS_validation
export VK_LOADER_DEBUG=all
cargo run
If you see some errors, please share!
I'm using hyprland-git
Thanks for trying, unfortunately there's nothing that could point to an issue :thinking: Does https://github.com/russelltg/wl-screenrec/ work for you? It uses similar approach to capture frames.
Thanks for trying, unfortunately there's nothing that could point to an issue 🤔 Does https://github.com/russelltg/wl-screenrec/ work for you? It uses similar approach to capture frames.
Opening libva device from DRM device /dev/dri/renderD128
[AVHWFramesContext @ 0x563978c43ec0] Failed to create surface: 14 (the requested RT Format is not supported).
[AVHWFramesContext @ 0x563978c43ec0] Unable to allocate a surface from internal buffer pool.
failed to create encoder(s): Failed to create vaapi frame context for capture surfaces of format BGRZ 1920x1080
thread 'main' panicked at src/main.rs:904:51:
enc left in intermediate state
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
| gharchive/issue | 2024-11-13T13:02:36 | 2025-04-01T06:44:55.061015 | {
"authors": [
"loss-and-quick",
"maximbaz"
],
"repo": "maximbaz/wluma",
"url": "https://github.com/maximbaz/wluma/issues/125",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
} |
2605524553 | Missing metrics
Hi,
I'm missing the backup metrics from mgob even though my backup succeeds
curl http://localhost:8090/metrics
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 5.0806e-05
go_gc_duration_seconds{quantile="0.25"} 5.0806e-05
go_gc_duration_seconds{quantile="0.5"} 5.0806e-05
go_gc_duration_seconds{quantile="0.75"} 5.0806e-05
go_gc_duration_seconds{quantile="1"} 5.0806e-05
go_gc_duration_seconds_sum 5.0806e-05
go_gc_duration_seconds_count 1
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 10
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.21.5"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 2.712728e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 3.514784e+06
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.447208e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 9512
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 3.324032e+06
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 2.712728e+06
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 2.629632e+06
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 4.775936e+06
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 4387
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 2.228224e+06
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 7.405568e+06
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.7295915429722137e+09
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 0
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 13899
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 19200
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 31200
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 121800
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 130368
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 4.194304e+06
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 1.757248e+06
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 950272
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 950272
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 1.5045896e+07
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 13
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 0.03
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 10
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 1.8481152e+07
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.72959153994e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1.274417152e+09
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes 1.8446744073709552e+19
# HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1
# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 2
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0
From the backup log file
2024-10-22T09:53:26.296+0000 writing tester.Tester to archive '/tmp/tester-1729590806.gz'
2024-10-22T09:53:26.296+0000 done dumping tester.meetings-report-queue (0 documents)
2024-10-22T09:53:26.296+0000 done dumping tester.potential-meetings-reports (0 documents)
2024-10-22T09:53:26.296+0000 done dumping tester.reports (1 document)
2024-10-22T09:53:26.296+0000 writing tester.identifiers to archive '/tmp/tester-1729590806.gz'
2024-10-22T09:53:26.296+0000 writing tester.credits to archive '/tmp/tester-1729590806.gz'
2024-10-22T09:53:26.297+0000 done dumping tester.Tester (0 documents)
2024-10-22T09:53:26.298+0000 done dumping tester.credits (0 documents)
2024-10-22T09:53:26.298+0000 done dumping tester.identifiers (0 documents)
From the container's log
mgob | time="2024-10-22T09:53:26Z" level=info msg="On demand backup started" plan=tester
mgob | time="2024-10-22T09:53:26Z" level=debug msg="dump cmd: mongodump --archive=/tmp/tester-1729590806.gz --gzip --host mongo --port 27017 -u \"${MONGODB_ADMIN_USERNAME}\" -p xxxx --db tester --authenticationDatabase admin " plan=tester
mgob | time="2024-10-22T09:53:26Z" level=info msg="new dump" archive=/tmp/tester-1729590806.gz err="<nil>" mlog=/tmp/tester-1729590806.log plan=tester
mgob | time="2024-10-22T09:53:26Z" level=debug msg="applying retention to *.gz* files"
mgob | time="2024-10-22T09:53:26Z" level=debug msg="applying retention to *.log files"
mgob | time="2024-10-22T09:53:26Z" level=info msg="Local backup finished filename:`/tmp/tester-1729590806.gz`, filepath:`/storage/tester/tester-1729590806.gz`, Duration: 2.145033ms" plan=tester
mgob | time="2024-10-22T09:53:26Z" level=info msg="S3 upload finished `/tmp/tester-1729590806.gz` -> `tester/tester-backups/tester/tester-1729590806.gz` Total: 151.76 KiB, Transferred: 151.76 KiB, Speed: 1.09 MiB/s " plan=tester
mgob | time="2024-10-22T09:53:26Z" level=info msg="Clean up temp finished Temp folder cleanup finished, `/tmp/tester-1729590806.gz` is removed." plan=tester
mgob | time="2024-10-22T09:53:26Z" level=info msg="On demand backup finished in 302.806218ms archive tester-1729590806.gz size 155 kB" plan=tester
Excerpt from the docker-compose.yml file
mgob:
<<: *service-defaults
image: "maxisam/mgob:${MGOB_VERSION}"
container_name: 'mgob'
depends_on:
# Don't start before mongo is up and running.
mongo:
condition: 'service_healthy'
ports:
- "${MGOB_PORT}:${MGOB_PORT}"
command:
- '--LogLevel=debug'
environment:
MONGODB_ADMIN_USERNAME: "${MONGODB_ADMIN_USERNAME}"
MONGODB_ADMIN_PASSWORD: "${MONGODB_ADMIN_PASSWORD}"
MGOB_BUCKET_NAME_AND_PATH: "${MGOB_BUCKET_NAME_AND_PATH}"
volumes:
- "${MGOB_CONFIG}:/config"
- "${MGOB_DATA}:/data"
- "${MGOB_STORAGE}:/storage"
- "${MGOB_TMP}:/tmp"
mgob version: 2.0.24
If any information is missing or required, let me know, I'll collect what I can
Thanks
I think it only works for scheduled backups. If you trigger it from the API, I don't think it will work
@maxisam you're right, the scheduler stores mgob metrics (scheduler.go:116)
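For readers hitting the same gap: only backups driven by a plan's cron go through the scheduler and therefore through the metric-recording path; on-demand API runs bypass it. A rough sketch of a scheduled plan, based on mgob's documented plan format (cron, retention, and target values below are placeholders):

```yaml
# Sketch of an mgob plan file; values are placeholders.
scheduler:
  cron: "0 */6 * * *"
  retention: 14
  timeout: 60
target:
  host: "mongo"
  port: 27017
  database: "tester"
```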
| gharchive/issue | 2024-10-22T13:49:58 | 2025-04-01T06:44:55.073019 | {
"authors": [
"boomfly",
"maxisam",
"reamasesa"
],
"repo": "maxisam/mgob",
"url": "https://github.com/maxisam/mgob/issues/133",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
638958436 | tabletojson is insecure and out of date
$ mkdir table-scraper-test
$ cd table-scraper-test
$ npm init -y
Wrote to...
$ npm i table-scraper
=== npm audit security report ===
# Run npm update is-my-json-valid --depth 5 to resolve 2 vulnerabilities
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ Moderate │ Regular Expression Denial of Service │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package │ is-my-json-valid │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ table-scraper │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path │ table-scraper > tabletojson > request > har-validator > │
│ │ is-my-json-valid │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info │ https://npmjs.com/advisories/76 │
└───────────────┴──────────────────────────────────────────────────────────────┘
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ Low │ Regular Expression Denial of Service │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package │ is-my-json-valid │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ table-scraper │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path │ table-scraper > tabletojson > request > har-validator > │
│ │ is-my-json-valid │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info │ https://npmjs.com/advisories/572 │
└───────────────┴──────────────────────────────────────────────────────────────┘
# Run npm update sshpk --depth 5 to resolve 1 vulnerability
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ High │ Regular Expression Denial of Service │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package │ sshpk │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ table-scraper │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path │ table-scraper > tabletojson > request > http-signature > │
│ │ sshpk │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info │ https://npmjs.com/advisories/606 │
└───────────────┴──────────────────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────────────────────────────────┐
│ Manual Review │
│ Some vulnerabilities require your attention to resolve │
│ │
│ Visit https://go.npm.me/audit-guide for additional guidance │
└──────────────────────────────────────────────────────────────────────────────┘
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ Moderate │ Regular Expression Denial of Service │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package │ hawk │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Patched in │ >=3.1.3 < 4.0.0 || >=4.1.1 │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ table-scraper │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path │ table-scraper > tabletojson > request > hawk │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info │ https://npmjs.com/advisories/77 │
└───────────────┴──────────────────────────────────────────────────────────────┘
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ Moderate │ ReDoS via long string of semicolons │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package │ tough-cookie │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Patched in │ >=2.3.0 │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ table-scraper │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path │ table-scraper > tabletojson > request > tough-cookie │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info │ https://npmjs.com/advisories/130 │
└───────────────┴──────────────────────────────────────────────────────────────┘
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ High │ Regular Expression Denial of Service │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package │ tough-cookie │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Patched in │ >=2.3.3 │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ table-scraper │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path │ table-scraper > tabletojson > request > tough-cookie │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info │ https://npmjs.com/advisories/525 │
└───────────────┴──────────────────────────────────────────────────────────────┘
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ Moderate │ Remote Memory Exposure │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package │ request │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Patched in │ >=2.68.0 │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ table-scraper │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path │ table-scraper > tabletojson > request │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info │ https://npmjs.com/advisories/309 │
└───────────────┴──────────────────────────────────────────────────────────────┘
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ Moderate │ Prototype Pollution │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package │ hoek │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Patched in │ > 4.2.0 < 5.0.0 || >= 5.0.3 │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ table-scraper │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path │ table-scraper > tabletojson > request > hawk > boom > hoek │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info │ https://npmjs.com/advisories/566 │
└───────────────┴──────────────────────────────────────────────────────────────┘
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ Moderate │ Prototype Pollution │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package │ hoek │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Patched in │ > 4.2.0 < 5.0.0 || >= 5.0.3 │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ table-scraper │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path │ table-scraper > tabletojson > request > hawk > cryptiles > │
│ │ boom > hoek │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info │ https://npmjs.com/advisories/566 │
└───────────────┴──────────────────────────────────────────────────────────────┘
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ Moderate │ Prototype Pollution │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package │ hoek │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Patched in │ > 4.2.0 < 5.0.0 || >= 5.0.3 │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ table-scraper │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path │ table-scraper > tabletojson > request > hawk > hoek │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info │ https://npmjs.com/advisories/566 │
└───────────────┴──────────────────────────────────────────────────────────────┘
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ Moderate │ Prototype Pollution │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package │ hoek │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Patched in │ > 4.2.0 < 5.0.0 || >= 5.0.3 │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ table-scraper │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path │ table-scraper > tabletojson > request > hawk > sntp > hoek │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info │ https://npmjs.com/advisories/566 │
└───────────────┴──────────────────────────────────────────────────────────────┘
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ Low │ Prototype Pollution │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package │ lodash │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Patched in │ >=4.17.5 │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ table-scraper │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path │ table-scraper > tabletojson > cheerio > lodash │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info │ https://npmjs.com/advisories/577 │
└───────────────┴──────────────────────────────────────────────────────────────┘
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ High │ Prototype Pollution │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package │ lodash │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Patched in │ >=4.17.11 │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ table-scraper │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path │ table-scraper > tabletojson > cheerio > lodash │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info │ https://npmjs.com/advisories/782 │
└───────────────┴──────────────────────────────────────────────────────────────┘
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ High │ Prototype Pollution │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package │ lodash │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Patched in │ >=4.17.12 │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ table-scraper │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path │ table-scraper > tabletojson > cheerio > lodash │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info │ https://npmjs.com/advisories/1065 │
└───────────────┴──────────────────────────────────────────────────────────────┘
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ Moderate │ Memory Exposure │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package │ bl │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Patched in │ >=0.9.5 <1.0.0 || >=1.0.1 │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ table-scraper │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path │ table-scraper > tabletojson > request > bl │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info │ https://npmjs.com/advisories/596 │
└───────────────┴──────────────────────────────────────────────────────────────┘
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ Moderate │ Memory Exposure │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package │ tunnel-agent │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Patched in │ >=0.6.0 │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ table-scraper │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path │ table-scraper > tabletojson > request > tunnel-agent │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info │ https://npmjs.com/advisories/598 │
└───────────────┴──────────────────────────────────────────────────────────────┘
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ Moderate │ Out-of-bounds Read │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package │ stringstream │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Patched in │ >=0.0.6 │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ table-scraper │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path │ table-scraper > tabletojson > request > stringstream │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info │ https://npmjs.com/advisories/664 │
└───────────────┴──────────────────────────────────────────────────────────────┘
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ Moderate │ Prototype Pollution │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package │ extend │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Patched in │ >=2.0.2 <3.0.0 || >=3.0.2 │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ table-scraper │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path │ table-scraper > tabletojson > request > extend │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info │ https://npmjs.com/advisories/996 │
└───────────────┴──────────────────────────────────────────────────────────────┘
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ High │ Insufficient Entropy │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package │ cryptiles │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Patched in │ >=4.1.2 │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ table-scraper │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path │ table-scraper > tabletojson > request > hawk > cryptiles │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info │ https://npmjs.com/advisories/1464 │
└───────────────┴──────────────────────────────────────────────────────────────┘
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ High │ Prototype Pollution Protection Bypass │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package │ qs │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Patched in │ >=6.0.4 <6.1.0 || >=6.1.2 <6.2.0 || >=6.2.3 <6.3.0 || │
│ │ >=6.3.2 │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ table-scraper │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path │ table-scraper > tabletojson > request > qs │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info │ https://npmjs.com/advisories/1469 │
└───────────────┴──────────────────────────────────────────────────────────────┘
found 20 vulnerabilities (2 low, 12 moderate, 6 high) in 238 scanned packages
run `npm audit fix` to fix 3 of them.
17 vulnerabilities require manual review. See the full report for details.
These are all caused by dependence on the out-of-date tabletojson@0.4.0. This package should be updated.
Thanks for the heads up @realprogrammer5000, just bumped to tabletojson@2.0.4.
@maxthyen could you upload a new version to NPM too?
@realprogrammer5000 done!
| gharchive/issue | 2020-06-15T15:49:31 | 2025-04-01T06:44:55.090986 | {
"authors": [
"maxthyen",
"realprogrammer5000"
],
"repo": "maxthyen/table-scraper",
"url": "https://github.com/maxthyen/table-scraper/issues/11",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
596292379 | expose log errors as events
Problem statement: As an admin, I would like to expose log errors from any binary running as a Pod as K8s events. I want to avoid having to view the logs.
One possible solution is to set up one or more custom resources against this binary that would in turn expose the errors found in the logs as events.
Update: it might be useful to just tail the logs or the stdout of a pod and generate a K8s event.
| gharchive/issue | 2020-04-08T04:01:56 | 2025-04-01T06:44:55.098620 | {
"authors": [
"AmitKumarDas"
],
"repo": "mayadata-io/d-operators",
"url": "https://github.com/mayadata-io/d-operators/issues/34",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1895518141 | Edge browser web UI display problem
On the Edge browser (Windows 10), the web UI has a display problem and the RPC information cannot be filled in.
Which version?
Which version?
Version 1.3.6 (the earlier 1.2.2 behaved the same). Strangely, when I double-click index.html directly on Windows 10, Edge displays it correctly.
Which version?
Version 1.3.6 (both the standard and single-file builds; the earlier 1.2.2 behaved the same). Strangely, when I double-click index.html directly on Windows 10, Edge displays it correctly, but when it is deployed to an nginx or Tomcat server, opening it in Edge shows the problem.
Which version?
Version 1.3.6 (both the standard and single-file builds; the earlier 1.2.2 behaved the same). Strangely, when I double-click index.html directly on Windows 10, Edge displays it correctly, but when it is deployed to an nginx or Tomcat server, opening it in Edge shows the problem.
Check whether some static files failed to load.
Which version?
Version 1.3.6 (both the standard and single-file builds; the earlier 1.2.2 behaved the same). Strangely, when I double-click index.html directly on Windows 10, Edge displays it correctly, but when it is deployed to an nginx or Tomcat server, opening it in Edge shows the problem.
Check whether some static files failed to load.
All static files load successfully; only the RPC configuration reports an error. Chrome and Firefox work fine.
Edge:
Chrome:
Then check whether ariana.mayswind.net/latest has the problem too, and whether a browser extension is causing it.
ariang.mayswind.net/latest
Visiting ariang.mayswind.net/latest shows the same problem. I will disable all the extensions and try again when I have time.
Found the cause: it is the 剪藏 (Web Clipper) extension; after disabling it everything works. Extension link: https://microsoftedge.microsoft.com/addons/detail/剪藏/hpgdpiddfjclcnngdlgibdfengaofemo
| gharchive/issue | 2023-09-14T02:00:30 | 2025-04-01T06:44:55.115880 | {
"authors": [
"gexiaopeng",
"mayswind"
],
"repo": "mayswind/AriaNg",
"url": "https://github.com/mayswind/AriaNg/issues/731",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
237459253 | Adding the reaction to attachment from SS is not working correctly
Reactions to attachments aren't reflected in the official Slack client.
The reaction layout is also different from the official client's.
Related to #79.
https://github.com/mazun/SlackStream/pull/79/files#diff-fb829e304f97e6108fa42d608726a349R189
As noted in the comment on the line above, Slack Stream treats a file and the original comment for the file as a single "comment", while the official client treats them as separate ones. This may cause such a curious situation.
| gharchive/issue | 2017-06-21T08:48:46 | 2025-04-01T06:44:55.131915 | {
"authors": [
"KongariBug",
"mazun"
],
"repo": "mazun/SlackStream",
"url": "https://github.com/mazun/SlackStream/issues/99",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1962159784 | Documentation on running mbari/ecs-autoscale
Need documentation on how to run the ECS stack with the docker image mbari/ecs-autoscale
Here is the plan
### Tasks
- [x] Add docs to mkdocs @danellecline
- [ ] Test command on the deployment machine doris with the mbari315k model @duane-edgington
Documentation added to https://docs.mbari.org/deepsea-ai/commands/ecsdeploy/
@duane-edgington please confirm that this doc is valid. Feel free to make any changes necessary to the docs if needed, then close the ticket with an optional comment when you are done testing.
| gharchive/issue | 2023-10-25T20:15:37 | 2025-04-01T06:44:55.135012 | {
"authors": [
"danellecline"
],
"repo": "mbari-org/deepsea-ai",
"url": "https://github.com/mbari-org/deepsea-ai/issues/26",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
642416077 | ILiteCollection.Upsert(T) [BUG]
Version
5.08
Describe the bug
ILiteCollection.Upsert(T) fails if the Android app is closed (removed from the recent apps screen). An upsert sometimes eventually gets saved to disk, but most often the previous version of the record/document will be read at app start.
LiteDatabase.Checkpoint() after each Upsert(.) helps as a workaround.
Otherwise, nice lib ;)
@Stillfreeusername What's probably happening is that the app is being closed before the Upsert transaction is completely finished, so all the data written to the log file is discarded the next time.
The ideal solution would be to call Dispose() from an app shutdown event in Android, but I'm not sure if that's even possible. Calling Checkpoint() after every operation definitely works, but it's slow. An alternative would be lowering the Checkpoint pragma, so that the Checkpoint operation runs more frequently (try something like db.CheckpointSize = 50, default value is 1000).
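For reference, here is a minimal sketch of the mitigations discussed above (per-operation checkpointing versus a smaller checkpoint size). The connection string, the Node type and the node variable are placeholders, not taken from the report:

```csharp
// Sketch only: trading write speed for durability on Android, where the OS
// may kill the process at any time. "app.db" and Node are placeholder names.
using (var db = new LiteDatabase(@"Filename=app.db"))
{
    // Option A: checkpoint more often, so less data sits only in the log file.
    db.CheckpointSize = 50; // default is 1000 pages

    var col = db.GetCollection<Node>("nodes");
    col.Upsert(node);

    // Option B (slowest but safest): flush the log into the data file right away.
    db.Checkpoint();
}
```

Option A keeps throughput reasonable while shrinking the window of unflushed data; Option B pays a disk flush on every write.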
"so all the data written to the log file is discarded the next time" but the next time the app starts, the container still gives me the old data…
Unfortunately Android can kill an app anytime, therefore calling Dispose() or relying on periodic saving is like playing Russian roulette :-/
| gharchive/issue | 2020-06-20T17:36:41 | 2025-04-01T06:44:55.138090 | {
"authors": [
"Stillfreeusername",
"lbnascimento"
],
"repo": "mbdavid/LiteDB",
"url": "https://github.com/mbdavid/LiteDB/issues/1722",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
100756390 | Huge Memory & CPU Usage
Hello,
I compared three versions of a test application. I saved a large number (10^7) of objects of a type with three properties of type double.
The first version saved to a file with JSON serialization (Newtonsoft Json). The second used LiteDB. The third simply saved to a text file (File.WriteAllLines).
Code LiteDB:
// Open database (or create if not exists)
using(var db = new LiteDatabase(@".\LiteDB.db"))
{
// Get node collection
var col = db.GetCollection<Node>("nodes");
col.Insert(nodes);
}
The results are:
| Version | Memory Usage | Disk Space Usage | Time Needed [ms] |
| --- | --- | --- | --- |
| Newtonsoft Json | ~500MB | 673MB | 68,630ms |
| LiteDB | ~6 675MB | 3 476MB | 497,464ms |
| Text File | ~500MB | 950MB | 38,913ms |
Is there a faster way of saving bigger data sets?
Hi @KeKl, thanks for your comment.
Performance drops and memory use grows in LiteDB when you are working with a huge mass of data. The reason is that LiteDB implements full transactions and a journal backup on save. Transactions are memory-only (that's why it uses so much memory) and the journal writes to disk twice.
To work fast with huge data in LiteDB, try to use:
using(var db = new LiteDatabase(@"filename=.\LiteDB.db; journal=false"))
{
var col = db.GetCollection<Node>("nodes");
col.InsertBulk(nodes);
}
The nodes variable must return an IEnumerable<Node>, and it is better if it uses yield return.
This code should run fast, but never as fast as Newtonsoft or a text file. LiteDB implements indexes (at least the _id index), stores data in pages, and has a much more complex data structure.
I have some plans to remove some features (like transactions and journals) in order to be simpler, use less memory and run faster.
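The yield return suggestion above can be sketched like this; the Node class is reconstructed from the report's description (three double properties) and its exact shape is an assumption:

```csharp
// Sketch: stream the 10^7 objects into InsertBulk with an iterator instead of
// materializing them all in memory first. Node's property names are assumed.
public class Node
{
    public double X { get; set; }
    public double Y { get; set; }
    public double Z { get; set; }
}

static IEnumerable<Node> GenerateNodes(int count)
{
    for (var i = 0; i < count; i++)
    {
        // Each Node is produced lazily, only when InsertBulk asks for it.
        yield return new Node { X = i, Y = i * 0.5, Z = i * 0.25 };
    }
}

// Usage:
using (var db = new LiteDatabase(@"filename=.\LiteDB.db; journal=false"))
{
    var col = db.GetCollection<Node>("nodes");
    col.InsertBulk(GenerateNodes(10000000));
}
```

Because the enumerable is lazy, only a small batch of objects lives in memory at a time instead of the whole ten million.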
Hi @mbdavid,
thank you for your answer.
I also tested your implementation. It's faster.
Thank you!
Regards,
Kevin
| gharchive/issue | 2015-08-13T12:26:10 | 2025-04-01T06:44:55.144653 | {
"authors": [
"KeKl",
"mbdavid"
],
"repo": "mbdavid/LiteDB",
"url": "https://github.com/mbdavid/LiteDB/issues/48",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1645586558 | ARCHE:4921-Define black/white lists of models/solutions
Extend the mps-gradle generator plugin with the possibility to define black/white lists of models/solutions, similar to the modelchecking plugin.
Please also update changelog and increase minor version number.
This PR has been replaced by #5.
| gharchive/pull-request | 2023-03-29T11:18:09 | 2025-04-01T06:44:55.160522 | {
"authors": [
"HeikoBecker",
"giorgosbekas1995",
"sergej-koscejev"
],
"repo": "mbeddr/mps-build-backends",
"url": "https://github.com/mbeddr/mps-build-backends/pull/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
225290327 | Group management
[x] Groups created in the database (request to the admins)
[x] The owner shares the code
[x] Validation by the administrators for users who have not paid
The creator can add content
#164
| gharchive/issue | 2017-04-29T14:28:07 | 2025-04-01T06:44:55.170934 | {
"authors": [
"mbernardeau",
"naustra"
],
"repo": "mbernardeau/Road-to-Russia-2018",
"url": "https://github.com/mbernardeau/Road-to-Russia-2018/issues/7",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
100558162 | Add support for segmentation in Analyzer
This is somewhat related to #16.
Would it be possible to include the class at http://www.programming-magic.com/20080816010106 ?
It is about 23KB and self-contained (no dictionary, no MeCab).
Analyzer::segment($string) could be a simple wrapper around that class's segment method.
Any thoughts ?
Thanks @Akeru for bringing this. I wasn't aware of that library. Result isn't 100% accurate but it does a pretty good job for a tiny piece of code :smile:
The original library seems to be written in JavaScript: http://chasen.org/~taku/software/TinySegmenter/ And I see various PHP implementations of it on GitHub: https://goo.gl/an6vHm
Why not just implement our own PHP version of this library? That way it will be easier to manage.
Yes, the claimed accuracy is 95% on newspaper articles. Going to hack around a bit.
PR #54
| gharchive/issue | 2015-08-12T14:39:24 | 2025-04-01T06:44:55.174499 | {
"authors": [
"Akeru",
"mbilbille"
],
"repo": "mbilbille/jpnforphp",
"url": "https://github.com/mbilbille/jpnforphp/issues/53",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1070280382 | Bucket Fill option:
This is a great app, but I am really missing the option to fill an area with a single colour.
Thanks! Due to how the program works (it's based on vectors rather than pixels), it's impossible to implement a traditional bucket fill operation. So I'm closing this.
| gharchive/issue | 2021-12-03T06:59:41 | 2025-04-01T06:44:55.199595 | {
"authors": [
"RickWillcox",
"mbrlabs"
],
"repo": "mbrlabs/Lorien",
"url": "https://github.com/mbrlabs/Lorien/issues/92",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
728989595 | System.Runtime.InteropServices.COMException: OpenClipboard failed (Exception from HRESULT: 0x800401D0 (CLIPBRD_E_CANT_OPEN)) (in v3.0.272)
Describe the bug
A clear and concise description of what the bug is.
To Reproduce
Steps to reproduce the behavior:
Go to '...'
Click on '....'
Scroll down to '....'
See error
System Information
OS: [e.g. Windows 10 2004]
Game version [e.g. Stream, GOG and please include the CS]
Version of the editor: [e.g. 3.0.347]
Additional context
Please also attach a save file that exhibits the issue.
Can you fill in the bug report form?
| gharchive/issue | 2020-10-25T10:29:38 | 2025-04-01T06:44:55.211160 | {
"authors": [
"mburbea",
"programmerzlp"
],
"repo": "mburbea/koar-item-editor",
"url": "https://github.com/mburbea/koar-item-editor/issues/85",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
54669237 | CppCheck: Exception thrown in destructor
These problems only apply to the debug build configuration; perhaps they are not critical.
cppcheck --force src/ClientHandle.cpp src/WebAdmin.cpp
Checking src/ClientHandle.cpp...
Checking src/ClientHandle.cpp: ANDROID_NDK...
Checking src/ClientHandle.cpp: GetFreeSpace;_WIN32...
Checking src/ClientHandle.cpp: SELF_TEST...
Checking src/ClientHandle.cpp: TEST_GLOBALS...
Checking src/ClientHandle.cpp: TEST_GLOBALS;_DEBUG;_MSC_VER;_WIN32...
[src/ClientHandle.cpp:105]: (error) Exception thrown in destructor.
Checking src/ClientHandle.cpp: TEST_GLOBALS;_WIN32...
Checking src/ClientHandle.cpp: _DEBUG...
Checking src/ClientHandle.cpp: _MSC_VER...
Checking src/ClientHandle.cpp: _WIN32...
Checking src/ClientHandle.cpp: _WIN32;_WIN64;__GNUC__...
Checking src/ClientHandle.cpp: _WIN32;__GNUC__...
Checking src/ClientHandle.cpp: __GNUC__...
Checking src/ClientHandle.cpp: __clang__...
1/2 files checked 81% done
Checking src/WebAdmin.cpp...
Checking src/WebAdmin.cpp: ANDROID_NDK...
Checking src/WebAdmin.cpp: GetFreeSpace;_WIN32...
Checking src/WebAdmin.cpp: SELF_TEST...
Checking src/WebAdmin.cpp: TEST_GLOBALS...
Checking src/WebAdmin.cpp: TEST_GLOBALS;_DEBUG;_MSC_VER;_WIN32...
[src/WebAdmin.cpp:58]: (error) Exception thrown in destructor.
Checking src/WebAdmin.cpp: TEST_GLOBALS;_WIN32...
Checking src/WebAdmin.cpp: _DEBUG...
Checking src/WebAdmin.cpp: _MSC_VER...
Checking src/WebAdmin.cpp: _WIN32...
Checking src/WebAdmin.cpp: _WIN32;_WIN64;__GNUC__...
Checking src/WebAdmin.cpp: _WIN32;__GNUC__...
Checking src/WebAdmin.cpp: __GNUC__...
Checking src/WebAdmin.cpp: __clang__...
2/2 files checked 100% done
It only happens in TEST_GLOBALS mode, an option that those files are not compiled with. It's because our test system uses exceptions to check for asserts.
Can you explain why you use exceptions to check for assertions? Is it because of file logging?
Because one of the tests checks that certain asserts fire. It does this by making asserts throw exceptions when compiled into a test and then catching the exception.
| gharchive/issue | 2015-01-17T17:54:08 | 2025-04-01T06:44:55.222282 | {
"authors": [
"UltraCoderRU",
"worktycho"
],
"repo": "mc-server/MCServer",
"url": "https://github.com/mc-server/MCServer/issues/1706",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
162932408 | New fixture - Proel-PLLEDMLBG
http://www.qlcplus.org/forum/viewtopic.php?f=3&t=10144
The creator has added an 8-channel, 8-bit mode not mentioned in the manual. I left it in.
Hi Robert
I confirm the presence of the 8-channel mode, even though there are no tables specifying it.
The following information is provided in the manual on page 19:
" CHSL
CHSL is defined as Channels Switch which total are two option include ON & OFF. When it is in “ON” position, the light is maximum channels. When it shows “OFF”, the light is the last two channels. 16 Bit is useless.
"
| gharchive/pull-request | 2016-06-29T13:58:41 | 2025-04-01T06:44:55.227267 | {
"authors": [
"boxy321",
"citofono"
],
"repo": "mcallegari/qlcplus",
"url": "https://github.com/mcallegari/qlcplus/pull/873",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2420038802 | Improve filter/search in extension listing
Currently it is not possible to filter by author name, only by GitHub username/organisation.
This seems to be an issue with the EJS template:
https://github.com/quarto-dev/quarto-cli/issues/7639
This seems to be an issue with the EJS template:
https://github.com/quarto-dev/quarto-cli/issues/10398
| gharchive/issue | 2024-07-19T22:04:34 | 2025-04-01T06:44:55.231283 | {
"authors": [
"mcanouil"
],
"repo": "mcanouil/quarto-extensions",
"url": "https://github.com/mcanouil/quarto-extensions/issues/63",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
75916063 | Nested scrollable (horizontal in vertical) problem on Android device (Chrome or any)
I have a template that includes nested scrollables: a horizontal (child) inside a vertical (parent) (code below).
The problem is that when I touch-scroll vertically on the horizontal scrollable zone, the parent scrollable does not scroll. (I want the parent scrollable to be scrolled vertically in this case.)
On the horizontal scrollable zone, only horizontal touch-scrolling works.
(Android device 4.4.2, Chrome or any browser; it works on PC and iPhone.)
After debugging, I found that the overthrow state is inactive on the Android device.
So I modified part of the overthrow code in mobile-angular-ui.js.
var doc.., // line 1505 [mobile-angular-ui.js]
nativeOverflow = ~~;
nativeOverflow = false; // I added this temporarily; it causes overthrow.support to be initialized to "none", not "native"
and then it works.
Is this an overthrow bug?
(My Android is 4.4.2 with the latest Chrome.)
Are there other problems when I force overthrow.support to be initialized to "none"?
Example code
SCSS
@import 'lib/mobile-angular-ui-base';
.top-content {
height:300px;
}
.horizontal-content {
height:150px;
position:relative;
.scrollable {
height:150px;
}
.scrollable-content {
white-space:nowrap;
.image {
display:inline-block;
width:200px;
margin-right:5px;
height:100%;
background-color:green;
}
}
}
.bottom-content {
height:500px;
}
HTML
<div class="app">
<div class="app-body">
<div class="app-content">
<div class="scrollable">
<div class="scrollable-content main">
<div class="top-content">
</div>
<div class="horizontal-content">
<div class="scrollable">
<div class="scrollable-content">
<div class="image" ng-repeat="i in [1,2,3,4,5]"></div>
</div>
</div>
</div>
<div class="bottom-content">
</div>
</div>
</div>
</div>
</div>
</div>
P.S
Sorry for my english T^T
If I force nativeOverflow to be false on Android, browser scrolling is stiff. :(
I am also facing the same problem. I made a hybrid application (for Android) with the Mobile Angular UI demo. All the features work well except vertical scrolling.
Any solution?
@lernrr777 what does it mean that vertical scroll does not work? Is it related to a nested scroll area, or even a plain scroll area?
@honeyl3oom Yours is a tricky case; I don't think overthrow was intended to address it. Anyway, we can discuss the proper behaviour and try to fix it in MAUI, or submit a PR to Overthrow.js.
Regarding this I need you to answer some questions: does the same happen on iOS?
| gharchive/issue | 2015-05-13T09:28:06 | 2025-04-01T06:44:55.237471 | {
"authors": [
"honeyl3oom",
"lernrr777",
"mcasimir"
],
"repo": "mcasimir/mobile-angular-ui",
"url": "https://github.com/mcasimir/mobile-angular-ui/issues/254",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1368529413 | 🛑 ChatGenie Client API is down
In 72ab11d, ChatGenie Client API ($CHATGENIE_CLIENT_API) was down:
HTTP code: 0
Response time: 0 ms
Resolved: ChatGenie Client API is back up in cbe4d21.
| gharchive/issue | 2022-09-10T06:58:02 | 2025-04-01T06:44:55.254324 | {
"authors": [
"mcdave029"
],
"repo": "mcdave029/uptime-monitoring",
"url": "https://github.com/mcdave029/uptime-monitoring/issues/10104",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1475488129 | 🛑 ChatGenie Client API is down
In 0a14dff, ChatGenie Client API ($CHATGENIE_CLIENT_API) was down:
HTTP code: 0
Response time: 0 ms
Resolved: ChatGenie Client API is back up in e68170e.
| gharchive/issue | 2022-12-05T02:48:08 | 2025-04-01T06:44:55.256512 | {
"authors": [
"mcdave029"
],
"repo": "mcdave029/uptime-monitoring",
"url": "https://github.com/mcdave029/uptime-monitoring/issues/11464",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1533187202 | 🛑 ChatGenie Client API is down
In 14c180d, ChatGenie Client API ($CHATGENIE_CLIENT_API) was down:
HTTP code: 0
Response time: 0 ms
Resolved: ChatGenie Client API is back up in 3593279.
| gharchive/issue | 2023-01-14T08:51:41 | 2025-04-01T06:44:55.259282 | {
"authors": [
"mcdave029"
],
"repo": "mcdave029/uptime-monitoring",
"url": "https://github.com/mcdave029/uptime-monitoring/issues/12509",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1758413958 | 🛑 ChatGenie Client API is down
In 2c32e66, ChatGenie Client API ($CHATGENIE_CLIENT_API) was down:
HTTP code: 0
Response time: 0 ms
Resolved: ChatGenie Client API is back up in ad104f3.
| gharchive/issue | 2023-06-15T09:23:55 | 2025-04-01T06:44:55.261549 | {
"authors": [
"mcdave029"
],
"repo": "mcdave029/uptime-monitoring",
"url": "https://github.com/mcdave029/uptime-monitoring/issues/17644",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1870828300 | 🛑 ChatGenie Client API is down
In c30dfd1, ChatGenie Client API ($CHATGENIE_CLIENT_API) was down:
HTTP code: 0
Response time: 0 ms
Resolved: ChatGenie Client API is back up in 8742cf9 after 9 minutes.
| gharchive/issue | 2023-08-29T03:46:30 | 2025-04-01T06:44:55.264415 | {
"authors": [
"mcdave029"
],
"repo": "mcdave029/uptime-monitoring",
"url": "https://github.com/mcdave029/uptime-monitoring/issues/20226",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1910352643 | 🛑 ChatGenie Client API is down
In 29a013b, ChatGenie Client API ($CHATGENIE_CLIENT_API) was down:
HTTP code: 0
Response time: 0 ms
Resolved: ChatGenie Client API is back up in 7e1a710 after 9 minutes.
| gharchive/issue | 2023-09-24T19:32:09 | 2025-04-01T06:44:55.266786 | {
"authors": [
"mcdave029"
],
"repo": "mcdave029/uptime-monitoring",
"url": "https://github.com/mcdave029/uptime-monitoring/issues/21019",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |