1331099248 | nidcpower and nifgen have incorrect data type for Import/ExportAttributeConfigurationBuffer
In the MI ImportAttributeConfigurationBuffer and ExportAttributeConfigurationBuffer functions, the configuration parameter is an array of bytes. The bytes contain a UTF-8 JSON string, but the API documentation doesn't document this, so it's appropriate for grpc-device to treat this parameter as an array of bytes.
nidcpower.proto and nifgen.proto incorrectly represent this parameter as repeated fixed64 configuration.
nidmm.proto, nifake.proto, and niscope.proto correctly represent this parameter as bytes configuration.
AB#2108035
Both NI-DCPower and NI-FGEN .proto files updated so those arguments are now bytes configuration.
NI-DCPower ImportAttributeConfigurationBufferRequest and ExportAttributeConfigurationBufferResponse
NI-FGEN ImportAttributeConfigurationBufferRequest and ExportAttributeConfigurationBufferResponse
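For reference, a sketch of the corrected declarations; the message and field names come from the issue, but other fields are omitted and the field numbers are illustrative assumptions:

```proto
// Sketch of the fix: the configuration parameter carries opaque bytes
// (a UTF-8 JSON string internally), so `bytes` is the correct proto type.
message ImportAttributeConfigurationBufferRequest {
  // Was: repeated fixed64 configuration = 2;
  bytes configuration = 2;
}

message ExportAttributeConfigurationBufferResponse {
  // Was: repeated fixed64 configuration = 2;
  bytes configuration = 2;
}
```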
| gharchive/issue | 2022-08-07T19:15:43 | 2025-04-01T06:45:08.717357 | {
"authors": [
"bkeryan",
"reckenro"
],
"repo": "ni/grpc-device",
"url": "https://github.com/ni/grpc-device/issues/694",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1278572018 | Create script to check if Linux RT feed needs updating.
What does this Pull Request accomplish?
Fixes AB#2009217
Creates a script to detect if the Linux RT Feed needs updating. The script will be supplied with a list of files that have changed and determine if any of them warrant updating the grpc-device version being pulled into Linux RT.
Why should this Pull Request be merged?
Stepping stone to be able to plug this into a GitHub Action to run on the releases branch so the developer creating a release knows if they need to update the Linux RT feed or not to point to the new grpc-device release.
What testing has been done?
Ran the script against several inputs to make sure it's working as intended:
PS ...\grpc-device> "source/custom/niscope_service.custom.cpp" | python source/codegen/validate-linux-rt.py ...\grpc-device\source\codegen\metadata\
Linux RT Feed likely needs updating.
PS ...\grpc-device> "source/custom/nirfsg_service.custom.cpp" | python source/codegen/validate-linux-rt.py ...\grpc-device\source\codegen\metadata\
Linux RT Feed should not need updating.
PS ...\grpc-device> git diff --name-only 46e1b9edd846505c21a9ee34bcdfc1689cc13cc9 HEAD | python source/codegen/validate-linux-rt.py ...\grpc-device\source\codegen\metadata\
Linux RT Feed likely needs updating.
PS ...\grpc-device>
Also tested piping in with a fake diff file to test more combinations.
Talked with @astarche offline about this and changed the approach. I'll reset both of your reviews. The new approach is more akin to "non-RT driver changes" only or "non-RT driver changes and CMakeLists.txt change" only indicating that updating the Linux RT Feed is likely not needed. Any changes outside those two scenarios probably warrant updating the Linux RT Feed.
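The decision rule described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the actual validate-linux-rt.py: the driver names and rules are assumptions standing in for whatever the real script reads from the metadata directory.

```python
# Hypothetical sketch of the decision logic described above; the set of
# non-RT drivers is an illustrative assumption, not grpc-device's real list.
NON_RT_DRIVERS = {"nirfsg"}

def feed_needs_update(changed_files):
    """Return True if any changed file warrants updating the Linux RT feed."""
    for path in changed_files:
        path = path.strip()
        if not path:
            continue
        if path.endswith("CMakeLists.txt"):
            continue  # build-file-only changes don't require a feed update
        if any(driver in path for driver in NON_RT_DRIVERS):
            continue  # non-RT driver changes aren't pulled into the feed
        return True  # anything else likely warrants an update
    return False

# Intended usage mirrors the PR: pipe `git diff --name-only` into the script
# and print a verdict based on feed_needs_update(sys.stdin.readlines()).
```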
| gharchive/pull-request | 2022-06-21T14:53:24 | 2025-04-01T06:45:08.720442 | {
"authors": [
"reckenro"
],
"repo": "ni/grpc-device",
"url": "https://github.com/ni/grpc-device/pull/671",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2295915186 | WaferMap intermittent test: "will have hover rectangle with no dimensions"
🧹 Tech Debt
Test failed in an unrelated PR build:
[test-concurrent:nimble-components] [test-firefox:verbose] WaferMap
[test-concurrent:nimble-components] [test-firefox:verbose] hover action with no canvas dimensions
[test-concurrent:nimble-components] [test-firefox:verbose] ✗ will have hover rectangle with no dimensions (57ms)
[test-concurrent:nimble-components] [test-firefox:verbose] Expected 460 to equal 0.
I think this can be closed?
Test no longer exists. Closing.
| gharchive/issue | 2024-05-14T16:34:28 | 2025-04-01T06:45:08.725238 | {
"authors": [
"m-akinc",
"munteannatan"
],
"repo": "ni/nimble",
"url": "https://github.com/ni/nimble/issues/2104",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1478839584 | Visual Design for basic table content types
😯 Problem to Solve
Provide designs for basic content types outside of what was already completed from the initial design
#883
In table cells
Numeric values (Related dev task: #1011)
Hyperlinks (Related dev task: #1012)
Icons (Related dev task: #1013 )
DateTime (Related dev task: #1014)
In table header cells
Icon
Text and Icon
💁 Proposed Solution
🤔 Open Questions
What size should the icons be?
Do we need default alignments for different data types?
🖼️ References/Prior Art
Design for basic table
#883
@RickA-NI here are a couple prototypes of right-aligned numeric columns. They reveal some interesting questions for you to provide guidance on:
For numeric columns, should we right align the values, both the values and the header, or neither?
If we right align the values, should we also set font-variant-numeric: tabular-nums to request that the browser render digits with equal widths? Does our font support this setting? (I couldn't see a difference when I played with it)
My initial reaction is that headers should match their data. I'm not sure what the ramifications are for all the stuff (sorting direction, menu icons, grab handles, etc.) we've talked about sticking up there, but visually it looks better and keeps the header from becoming dissociated from the data as in your example.
Source Sans Pro renders digits with equal widths by default. You shouldn't need to turn that on unless we're trying to force it into a monospaced-like mode where the decimal places and commas take up equal space to a digit.
I've always thought....strings to the left, numbers to the right...but it is challenging to visually connect data when you don't have borders in the headers to group columns, like we've got.
I found some blog on table design and it had three (and a half) rules for alignment that seem to jibe pretty well with what we're all aligning on here:
Ahhhh...the good ole "some blog" :) Just kidding, the No.3 is new to me but does appear to be a pretty standard practice and something I've, admittedly, not thought about too much. Learn something new everyday. 🌈⭐️
No. 3 1/2 is interesting as there could be some cases (not necessarily our use cases) where centering the value along a separator might be preferable, like the resolution and ratio columns in the image below. Not sure if modern CSS supports this well yet, and it probably isn't a priority for us right now, but you can learn more at: https://ux.stackexchange.com/questions/24066/what-is-the-best-practice-for-data-table-cell-content-alignment
Here's a link to another some blog that reiterates 1-3 of the simple rules to follow...with a fun little animation to show the impact. https://www.darkhorseanalytics.com/blog/clear-off-the-table/
how is that content aligned in the resolution and aspect ratio columns? I get the intent to align by the midpoint symbol (x or :) but the data in those columns is not left right or center aligned it's like some bespoke hand layout stuff. If you couldn't do a custom layout and had to pick one then I'd think matching the header to the one pick is still the right choice.
@RickA-NI, just for clarity, the aspect ratio column is using an existing CSS mechanism to provide that alignment behavior. Basically, CSS would look like this:
.resolution {
text-align: 'x' center;
}
.aspect-ratio {
text-align: '.' center;
}
aspect ratio column is using an existing CSS mechanism
Support for this particular CSS feature doesn't look so great. Looks like we still need vendor prefixes and a non-vendor prefix version might be deprecated/obsolete.
https://caniuse.com/mdn-css_properties_text-align_block_alignment_values
Support as of 02/02/2023
@jattasNI @nate-ni @leslieab I think I've got these recommendations ready to go. I just want to run them by everyone here before I add examples to the visual spec Storybook is pulling in.
These three rules I think are widely accepted norms and good guidelines for all column types:
Numeric data is right-aligned
Textual data is left-aligned
Headers are aligned with their data
For Numeric value columns I think the above guidelines and Jesse's research have pretty much laid out what we do here. We right-align and use tabular numbers. The font is standard body copy. Source Sans Pro has tabular numbers by default, so we get that for free.
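The numeric-column recommendation above boils down to two CSS declarations; this is a sketch with an illustrative class name, not nimble's actual styles:

```css
/* Right-align numeric values and request equal-width digits. Source Sans Pro
   is tabular by default, so tabular-nums mainly guards against font fallbacks. */
.numeric-column-cell {
    text-align: right;
    font-variant-numeric: tabular-nums;
}
```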
For Hyperlinks columns we use our standard anchor link control. We treat them as textual data and left-align.
For Icon columns we can have icons with text or without. Either way we treat them as textual data and left-align. Text when needed is standard body copy and offset from icon using our standard sizing token.
DateTime columns are mostly numeric data but seem to be usually treated like textual data and left-aligned. Again we use our standard body copy font. There is a standard format set by ISO 8601 and suggested for use by W3C on how to format this data. We should look into adopting it at least as a default. Here is a summary:
For a fuller breakdown of this standard, you can see this page written by the author:
https://www.cl.cam.ac.uk/~mgk25/iso-time.html
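Since nimble is a web component library, the ISO 8601 profile mentioned above is what the platform already produces; a sketch, not nimble code:

```javascript
// Date.prototype.toISOString always emits the ISO 8601 / W3C profile, in UTC.
const timestamp = new Date(Date.UTC(2023, 1, 2, 13, 45, 30)); // Feb 2, 2023
console.log(timestamp.toISOString()); // "2023-02-02T13:45:30.000Z"
```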
The only one that we may have to think more about is #3..."Headers are aligned with their data" specifically for right-aligned content (numbers). All other direction seems pretty solid to me. Since we intend to have controls and icons (for sorting) in the header, we may need to explore how that impacts that particular guidance.
As an example, GH is in a similar situation and has maintained left-alignment in their headers for numeric, right-aligned, columns. While this doesn't dictate our solution, it is a reference to see the impact of that solution.
I know we don't have final designs for those fully featured headers yet but the initial prototype may give us something to work from.
I agree, I think right-aligning headers is going to be problematic. And due to sort/grouping iconography and the menu button, the text will never vertically right align with row cell content anyway.
@leslieab just brought up the points I was about to. So...I have nothing. Gosh leslie!
Here are my column alignment mockups. I think this covers all of the relevant scenarios. Let me know if you'd like to see something else.
Thanks for posting these, @RickA-NI!! It is really helpful.
If our tables were static (non-interactive) displays of data...or maybe just didn't have multiple elements in the header, I think we would align the header content with the column content...but they aren't. I think we should start with keeping the header content left-aligned.
This might just be a place to start; I think this is a decision we can change relatively easily and with low impact.
Having all the controls in the same order across headers will also likely make things a little less complicated when we get to supporting keyboards and screen readers.
One more case to consider: would numeric data with units still be right aligned? We lose some benefit of right aligning if the units have inconsistent widths. e.g.:
Voltage
101.0V
1.3mV
25.7kV
Sounds like @nate-ni's vote is to go with "Numeric headers left aligned, values right-aligned" to begin with. I'm ok with this as our initial approach but want to note that, to me, it feels a bit harder to associate the header with the data than other examples in this thread because of our lack of vertical lines separating the columns. While I recognize it's not my area of expertise or decision to make, is there anything we could do visually to increase the association between the column and its header in that case? e.g. Subtle background colors? Always showing the header dividers? @RickA-NI
We decided in a team meeting to start with "Numeric headers left aligned, values right-aligned". The concern above will likely be moot if users size their columns to match their data; there won't be large amounts of whitespace which cause headers to be far from their content. We can always revisit this if it turns out to be a faulty assumption.
Note from Rick in response to a question from Molly in a team meeting: if the content is mixed data type (e.g. an SLE tag column with strings and numbers) then it should be left aligned.
@jattasNI I know that this has been closed but I did come across a pretty in-depth breakdown of how to format a table for scientific and technical publication. This is for print so there are some things we might do differently, but this section seems pertinent to your last question:
| gharchive/issue | 2022-12-06T11:09:07 | 2025-04-01T06:45:08.749230 | {
"authors": [
"RickA-NI",
"atmgrifter00",
"jattasNI",
"leslieab",
"nate-ni"
],
"repo": "ni/nimble",
"url": "https://github.com/ni/nimble/issues/887",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
772603627 | Add methods to move, copy and swap slots
:warning: PR 141 must be merged first into main.
[x] This contribution adheres to CONTRIBUTING.md.
TODO: Check the above box with an 'x' indicating you've read and followed CONTRIBUTING.md.
What does this Pull Request accomplish?
Adding methods to the scripting API to move, copy and swap slots. These make it easier for users to use existing modules in other slots without having to read and write them manually.
Why should this Pull Request be merged?
Adding methods to the scripting API to move, copy and swap slots. These make it easier for users to use existing modules in other slots without having to read and write them manually.
What testing has been done?
Ensured all the unit tests pass and added one for each function.
Providing feedback on the tool itself now, as even though the other VIs conflict, the tool can be reviewed independently.
[ ] Error 7, which occurs if the library path is invalid, seems like it should also be listed as a recoverable error.
[ ] Why is the 5xxx range listed as recoverable? Ahh, I see that you're using it for your own errors throughout the application. Probably worth mentioning in Error Handler.vi, like with Error 43.
[ ] Import from CSV.vi's icon claims it is Export to CSV.vi.
[ ] I'm not understanding the usage of "This VI" as the default value in Save Documentation to VIs.vi.
[ ] I don't believe you need to read/specify the VI path in Save Documentation to VIs.vi. Leaving the path unspecified should save it in place.
[ ] Clicking Load Library should unload any loaded VIs first. I loaded a few different libraries and observed that all previously loaded VIs stayed in memory, despite not appearing in the list of VIs.
I'm glad you aren't saving VIs which already match the description!
I'm not generally opposed to the creation of utility methods such as these, since it's possible they help solve a specific use case. That said, it'd be helpful to understand the context for when you expect these VIs to be used.
Is scripting the copying, swapping, or moving of modules without access to the program that scripted the original system definition something you expect to be common? If modules have changed locations, I imagine it would be easier to just update and re-run the original scripting code to use the new locations, rather than write a second program to patch them.
Swap Slot.vi (and by extension Move Slot.vi) appears to work in my minimal testing, but I'm worried the implementation is going to subtly break something. At best, the end result is confusing.
Up until now, there has been an invariant that the order of the slots in the system definition aligns with the values of the slot number properties. This change appears to violate that invariant, and I'm not sure where in the custom device that invariant might be relied on.
I created an instance of the custom device by hand, added a module to slot 1, and then tried to use Move Slot.vi to move that module to slot 2. The result can be seen below; it's still the first slot under the local chassis, and still named "Slot 1", but the associated property claims it is slot 2. The result can still be deployed (at least with only one module present), but it's obviously not intuitive behavior.
We should maintain the "system definition order is the slot order" invariant.
I'm gonna need to put some thought into this one. I would prefer to keep them if I can, but I'll check with @debryant and @Sarci1 the use cases, and consider if fixing the Slot naming would be too challenging.
@rtzoeller I spoke with @debryant and we agreed that not all these utilities will be needed for our use case. Therefore, I am removing both Move Slot and Swap Slot, because they leave the system definition file in a bad state and we don't want to invest time right now in them.
[ ] Set Slot.vi needs to be updated to take advantage of the new helper functions.
The rest of the changes look good.
@oscarfonloz on further inspection, it's probably also worth renaming Swap Slot.vi to Swap Slots.vi, since it operates on two slots.
The Swap and Move VIs are removed from this PR. The Copy VI remains. Correct?
@debryant yes; thanks for catching that.
[x] Set Slot.vi needs to be updated to take advantage of the new helper functions.
The rest of the changes look good.
Updated, thanks!
| gharchive/pull-request | 2020-12-22T03:14:33 | 2025-04-01T06:45:08.771322 | {
"authors": [
"debryant",
"oscarfonloz",
"rtzoeller"
],
"repo": "ni/niveristand-scan-engine-ethercat-custom-device",
"url": "https://github.com/ni/niveristand-scan-engine-ethercat-custom-device/pull/145",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
814564420 | Add FileService MinIO documentation
[x] This contribution adheres to CONTRIBUTING.md.
Adding the documentation on using MinIO as a storage provider for the SystemLink FileService.
The official documentation covers how to use Amazon S3 as a cloud storage provider, which is what our implementation primarily addresses. Beyond that, MinIO can also be used with a few minor tweaks. This, however, is not part of the documentation on ni.com.
Here's a preview: https://github.com/ni/systemlink-operations-handbook/blob/69cbc455c0b57f6e57b9b728af01d22fb677d4c5/handbook/FileService-MinIO/FileService-MinIO.md
The "note"-block is not shown correctly in that preview, it'll look like this:
I've made a few changes to this review, addressing a couple of points mentioned by Mark. However, I dismissed a handful of requests that ask for a major change concerning GCS and other cloud storages. We will only address MinIO for the moment and update in the future if required. This course of action was approved by Stefan Romainczyk.
Hi @Thymu, given the constraints that keep us from supporting Azure blob or GCP I suggest we change the name of this document. Something like, Leverage File Service S3 for On-premises storage.
I'd also like this document to be moved into the data-stores directory rather than kept in its current directory.
Please make these couple of changes, and I will approve the PR.
Done :)
| gharchive/pull-request | 2021-02-23T15:21:59 | 2025-04-01T06:45:08.777189 | {
"authors": [
"Thymu"
],
"repo": "ni/systemlink-operations-handbook",
"url": "https://github.com/ni/systemlink-operations-handbook/pull/27",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1243972119 | Comments for "Sniffing TLS traffic on Android"
Comments made here will be shown on the "Sniffing TLS traffic on Android" article.
https://nibarius.github.io/learning-frida/2022/05/21/sniffing-tls-traffic
Great article! Very informative and simple approach - just makes sense. I was struggling to decrypt Android traffic for a while and when I read this I was like "Duh". Hahaha Thanks for the article!
So, I've always used and preferred Fiddler, so on the information from your Android 11 guide, and the fact I am using a physical device, I tried to get that working. Fiddler would only see and decrypt traffic from Chrome and not other applications (although interestingly I could see the SSL CONNECTs being logged).
Before I embark on using PolarProxy for the first time, I wonder if this is more a security feature of Android 13 and I'm not going to get any further. What do you think?
I haven't started using Android 13 myself yet, but I'm not aware of any particular security features on Android 13 that should make things more difficult than they were on Android 11. So I'm hoping things will work the same.
I have followed the article but can't get decrypted info in the pcap file. One thing I'm not clear on is how to set up the Access Point on Android to use our proxy. Do you guys know what server IP and port I should enter? Thank you!!
You should not use any proxy at all in the Access Point settings on Android. PolarProxy is a transparent proxy, so your Android phone doesn't know that it's talking to a proxy. It thinks it's making a normal request directly to the target server. It's the adb reverse and iptables rules that make sure that the traffic is re-routed to the PolarProxy server on its way to the remote server.
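For reference, the re-routing described above is typically wired up with commands along these lines. The port number is an assumption, and the iptables rule requires a rooted device; consult the article for the exact setup:

```shell
# Sketch of the transparent redirection (port 20443 is an assumption).
# Make the host's PolarProxy port reachable on the phone as localhost:20443:
adb reverse tcp:20443 tcp:20443
# On a rooted phone, transparently redirect outgoing HTTPS into that port:
adb shell su -c "iptables -t nat -A OUTPUT -p tcp --dport 443 -j REDIRECT --to-ports 20443"
```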
Oh thank you! I had thought that the problem was the APN, because after setting up the proxy, Android 11 could not access the internet. I have created an identical virtual Android 11 and it can access the internet. Are you facing internet problems on Android 11?
Unfortunately, I still can't get decrypted data. I have removed my custom APN but still have the default APN, so I use the emulator setting and set it to No proxy.
| gharchive/issue | 2022-05-21T13:04:09 | 2025-04-01T06:45:08.788845 | {
"authors": [
"Rhynorater",
"nibarius",
"npendlington",
"tranxuanloc"
],
"repo": "nibarius/learning-frida",
"url": "https://github.com/nibarius/learning-frida/issues/25",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1974331760 | Investigate faster AVR floating point implementation
Currently the project uses the software floating point implementation provided by AVR GCC, however a faster implementation probably exists.
Check what options are available for XMEGA (or 8-bit systems).
Implement an option.
Perform basic tests to validate the performance.
Replace existing floating point usage with library (if it is faster).
I haven't started digging through the code yet, but I can't see any good reason there'd be any floats at all. Depending on how heavily they're used, it may not be impractical to just rewrite anything with floats. If there are particular pain points, intermediate fixed-point stuff can probably be written as shims until it's de-floated.
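A fixed-point shim of the kind suggested above can be sketched as a Q16.16 type: the multiply becomes one widening integer multiply plus a shift, which is far cheaper on an 8-bit AVR than a software float multiply. The type and function names here are illustrative, not from the neon_samurai codebase.

```c
#include <stdint.h>

/* Q16.16 fixed point: 16 integer bits, 16 fractional bits. */
typedef int32_t q16_16;

static inline q16_16 q16_from_int(int32_t x)
{
    return x << 16;
}

static inline q16_16 q16_mul(q16_16 a, q16_16 b)
{
    /* A 64-bit intermediate keeps the full product before rescaling. */
    return (q16_16)(((int64_t)a * b) >> 16);
}
```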
| gharchive/issue | 2023-11-02T14:03:51 | 2025-04-01T06:45:08.794663 | {
"authors": [
"VegaDeftwing",
"nic-starke"
],
"repo": "nic-starke/neon_samurai",
"url": "https://github.com/nic-starke/neon_samurai/issues/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2757785395 | Raspberry Pi 4B Package Versions
Hello,
I just tried to get this running on my Raspberry Pi 4B, but unfortunately I get a lot of issues over and over, and I came to the conclusion that there might be a problem with the versions of the packages and of Python itself. I wondered if I could install older versions of them to get it running... What versions did you run it with?
Thank you and Merry Christmas
Hi there
Yeah that should work out. This is the output of pip list for me:
Package Version
--------------------- ---------
absl-py 0.9.0
asn1crypto 0.24.0
astor 0.8.1
attrs 19.3.0
backcall 0.1.0
bleach 3.1.1
cachetools 4.0.0
certifi 2018.8.24
chardet 3.0.4
colorzero 1.1
cryptography 2.6.1
cycler 0.10.0
Cython 0.29.15
decorator 4.4.1
defusedxml 0.6.0
entrypoints 0.3
future 0.18.2
gast 0.2.2
google-auth 1.11.2
google-auth-oauthlib 0.4.1
google-pasta 0.1.8
gpiozero 1.5.1
grpcio 1.27.2
h5py 2.9.0
idna 2.6
importlib-metadata 1.5.0
ipykernel 5.1.4
ipython 7.12.0
ipython-genutils 0.2.0
ipywidgets 7.5.1
jedi 0.16.0
Jinja2 2.11.1
jsonschema 3.2.0
jupyter 1.0.0
jupyter-client 6.0.0
jupyter-console 6.1.0
jupyter-core 4.6.3
Keras-Applications 1.0.8
Keras-Preprocessing 1.1.0
keyring 17.1.1
keyrings.alt 3.1.1
kiwisolver 1.1.0
lxml 4.5.0
Markdown 3.2.1
MarkupSafe 1.1.1
matplotlib 3.1.3
mistune 0.8.4
mock 4.0.1
nbconvert 5.6.1
nbformat 5.0.4
notebook 6.0.3
numpy 1.16.2
oauthlib 3.1.0
opencv-contrib-python 4.1.0.25
opencv-python 4.1.1.26
opt-einsum 3.1.0
pandas 1.0.3
pandocfilters 1.4.2
parso 0.6.2
pexpect 4.8.0
picamera 1.13
pickleshare 0.7.5
Pillow 7.0.0
pip 20.2.4
prometheus-client 0.7.1
prompt-toolkit 3.0.3
protobuf 3.11.3
ptyprocess 0.6.0
pyasn1 0.4.8
pyasn1-modules 0.2.8
pybind11 2.5.0
pycrypto 2.6.1
Pygments 2.5.2
PyGObject 3.30.4
pyparsing 2.4.6
pyrsistent 0.15.7
python-apt 1.8.4.1
python-dateutil 2.8.1
python-telegram-bot 12.7
pytz 2019.3
pyxdg 0.25
pyzmq 19.0.0
qtconsole 4.6.0
requests 2.21.0
requests-oauthlib 1.3.0
RPi.GPIO 0.7.0
rpimotorlib 3.1
rsa 4.0
scipy 1.4.1
SecretStorage 2.3.1
Send2Trash 1.5.0
setuptools 45.2.0
six 1.14.0
ssh-import-id 5.7
telegram 0.0.1
tensorboard 2.0.2
tensorflow 1.14.0
tensorflow-estimator 1.14.0
termcolor 1.1.0
terminado 0.8.3
testpath 0.4.4
tornado 6.0.3
traitlets 4.3.3
urllib3 1.24.1
wcwidth 0.1.8
webencodings 0.5.1
Werkzeug 1.0.0
wheel 0.34.2
widgetsnbextension 3.5.1
wrapt 1.12.0
zipp 3.0.0
Creating a container for this project would be a good thing as well.
Hi,
thank you for you fast response.
I now tried for a few hours to get this running on my machine as well as in a container.
I would be really thankful if you could help me.
I've got the problem that I need to build the image on the Raspberry Pi itself, as I need picamera, which I can only install when building on the machine itself. Otherwise I am getting this error (trying to build it on Windows):
35.80 Running setup.py install for picamera: started
35.95 Running setup.py install for picamera: finished with status 'error'
35.96 error: subprocess-exited-with-error
35.96
35.96 × Running setup.py install for picamera did not run successfully.
35.96 │ exit code: 1
35.96 ╰─> [19 lines of output]
35.96 running install
35.96 Traceback (most recent call last):
35.96 File "<string>", line 36, in <module>
35.96 File "<pip-setuptools-caller>", line 34, in <module>
35.96 File "/tmp/pip-install-58t2c1fl/picamera_2d75c2f46a8b4d4295f5617ecc671aa8/setup.py", line 145, in <module>
35.96 main()
35.96 File "/tmp/pip-install-58t2c1fl/picamera_2d75c2f46a8b4d4295f5617ecc671aa8/setup.py", line 140, in main
35.96 cmdclass = {'install': CustomInstallCommand},
35.96 File "/usr/local/lib/python3.7/site-packages/setuptools/__init__.py", line 153, in setup
35.96 return distutils.core.setup(**attrs)
35.96 File "/usr/local/lib/python3.7/distutils/core.py", line 148, in setup
35.96 dist.run_commands()
35.96 File "/usr/local/lib/python3.7/distutils/dist.py", line 966, in run_commands
35.96 self.run_command(cmd)
35.96 File "/usr/local/lib/python3.7/distutils/dist.py", line 985, in run_command
35.96 cmd_obj.run()
35.96 File "/tmp/pip-install-58t2c1fl/picamera_2d75c2f46a8b4d4295f5617ecc671aa8/setup.py", line 111, in run
35.96 raise ValueError('Unable to determine if this system is a Raspberry Pi')
35.96 ValueError: Unable to determine if this system is a Raspberry Pi
35.96 [end of output]
My Dockerfile looks like this:
# Use Python 3.7 slim base image compatible with ARM architecture
FROM python:3.7-slim-buster
# Set working directory in the container
WORKDIR /app
# Install system dependencies required for TensorFlow
RUN apt-get update && apt-get install -y \
gcc \
g++ \
git \
&& rm -rf /var/lib/apt/lists/*
# Copy requirements file (if you have one)
COPY requirements.txt .
# Install TensorFlow and its models
# RUN pip install --no-cache-dir tensorflow @ https://files.pythonhosted.org/packages/f4/28/96efba1a516cdacc2e2d6d081f699c001d414cc8ca3250e6d59ae657eb2b/tensorflow-1.14.0-cp37-cp37m-manylinux1_x86_64.whl
# RUN git clone https://github.com/tensorflow/models.git tensorflow/models
# Detect if the system is a Raspberry Pi and install picamera only if true
RUN if [ -e /sys/firmware/devicetree/base/model ]; then \
pip install picamera==1.13; \
fi
RUN pip install --break-system-packages --no-cache-dir -r requirements.txt
# Add TensorFlow models to Python path using ENV (Docker's way)
ENV PYTHONPATH="${PYTHONPATH}:/app/tensorflow/models/research:/app/tensorflow/models/research/slim"
# Copy your project files
COPY . .
# Make the script executable
# COPY catCam_starter.sh .
# RUN chmod +x catCam_starter.sh
# Command to run your application
CMD ["python3", "cascade.py"]
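For what it's worth, the Pi check that the conditional RUN step above relies on can also be sketched in Python. This is only an illustration of the same heuristic (reading /sys/firmware/devicetree/base/model, a file that exists only on device-tree systems such as the Raspberry Pi); it is not picamera's own detection logic, and the helper name is made up here:

```python
from pathlib import Path

def is_raspberry_pi(model_file="/sys/firmware/devicetree/base/model"):
    """Mirror the Dockerfile's check: does the device-tree model name a Pi?"""
    try:
        raw = Path(model_file).read_bytes()
    except OSError:
        # File missing: not a Raspberry Pi (or not a device-tree system).
        return False
    # The model string is NUL-terminated on real hardware; strip that off.
    model = raw.decode("ascii", "ignore").strip("\x00\n ")
    return model.startswith("Raspberry Pi")
```

A script such as cascade.py could then guard its camera setup on `is_raspberry_pi()` instead of letting pip fail at build time, which is the same effect as the `RUN if [ -e ... ]` line above.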
And I am trying to get tensorflow 1.14 from this website:
https://files.pythonhosted.org/packages/f4/28/96efba1a516cdacc2e2d6d081f699c001d414cc8ca3250e6d59ae657eb2b/tensorflow-1.14.0-cp37-cp37m-manylinux1_x86_64.whl
as I am not able to install it via pip install tensorflow. Unfortunately this is not for arm64, so I am unable to install it, both in the container and locally. I couldn't find any other version which could work, to my knowledge.
Thank you!
Following xx from the readme you should be able to get the wheel from here: wget https://github.com/lhelontra/tensorflow-on-arm/releases/download/v1.8.0/tensorflow-1.8.0-cp35-none-linux_armv7l.whl
Yes you will need to build the container on the pi due to it being ARM.
| gharchive/issue | 2024-12-24T13:10:51 | 2025-04-01T06:45:08.810794 | {
"authors": [
"niciBume",
"schneiderLukas"
],
"repo": "niciBume/Cat_Prey_Analyzer",
"url": "https://github.com/niciBume/Cat_Prey_Analyzer/issues/33",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
776473139 | (#116) Avoid FILE directive in gain plugin example and docs
Per the discussion in #116, I'm proposing here that we avoid __FILE__ as it seems to have non-standard behavior. Instead, I'm using a preprocessor directive for the GainPlugin example, and I've updated the docs to alert new users that they will need to fill in the correct path to their bundle file.
Looks sensible to me sir!
Looks reasonable to me too!
| gharchive/pull-request | 2020-12-30T14:07:34 | 2025-04-01T06:45:08.813180 | {
"authors": [
"JoshMarler",
"nick-thompson",
"tomoyanonymous"
],
"repo": "nick-thompson/blueprint",
"url": "https://github.com/nick-thompson/blueprint/pull/197",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2741222174 | The &msc options show mouse movement events instead of scroll events
I am trying to create custom behavior for a rotary encoder, and I have noticed that the latest mouse emulation features have been incorporated into the editor. However, mouse movement events are still being listed under &msc behavior, instead of the expected scroll options.
Yeah, this one's a pain in the ass because the scroll and move behaviors are functionally the same (both are instances of the zmk,behavior-input-two-axis behavior) and even the different binding parameters really just boil down to different default distances.
I have one overlay for behaviors with zmk,behavior-input-two-axis and another for &msc with the scroll-specific treatment, but that means custom made behaviors intended for scrolling won't match. This is the only case where I'm still using this way to match so the error went unnoticed until now.
| gharchive/issue | 2024-12-16T03:22:53 | 2025-04-01T06:45:08.824802 | {
"authors": [
"ng-nicholas",
"nickcoutsos"
],
"repo": "nickcoutsos/keymap-editor",
"url": "https://github.com/nickcoutsos/keymap-editor/issues/274",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
421619258 | Match more interface types
Currently, some of the built-in rules will only match interface types that meet the following pattern:
parent: ^interface\s+GigabitEthernet0/[0-9]+$
However, IOS L3 devices have more interface naming conventions than this. I think it would be good, instead of having a long ass regex here for all the interface patterns, for this to be an alias to something like L3Interfaces that the longer regex then gets substituted for.
For example, on an ISR4331, the built-in interfaces will be named:
interface GigabitEthernet0/0/0 - 2
The first number of the EHWIC L3 ports will be higher.
Old 2800s will be named:
interface FastEthernet0/0
2900s will be named:
as above, GigabitEthernet0/[0-9]+$
I'm assuming a similar set of issues on the ASA side:
ASA5508s will be:
interface GigabitEthernet1/1
ASA5505s will be:
ASA5510s will be:
ASA5516s will be:
ASA5525s will be:
A more generic regex is probably the right answer, any kind of Ethernet interface \S+Ethernet\s+ or something like that, with any numbering scheme. Honestly the only reason we care about matching the word Ethernet is because of Ethernet-specific features like portfast, proxy-ARP, etc. The rest is irrelevant I think.
Agree
maybe something closer to
parent: ^interface\s+Ethernet/[0-9]+$
I think starting the line with interface is important though
Yes, I was suggesting something like:
parent: ^interface\s+\S*Ethernet[0-9/.]+$
This would work on Nexus and IOSv where simply Ethernet is used.
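For anyone wanting to sanity-check the proposed pattern, the snippet below (a throwaway test, not part of the rule files themselves) exercises it against the interface names mentioned in this thread:

```python
import re

# Proposed generic parent pattern from the discussion above.
PATTERN = re.compile(r"^interface\s+\S*Ethernet[0-9/.]+$")

interfaces = [
    "interface GigabitEthernet0/1",      # 2900-style numbering
    "interface GigabitEthernet0/0/0",    # ISR4331 three-part numbering
    "interface FastEthernet0/0",         # old 2800s
    "interface GigabitEthernet1/1",      # ASA5508
    "interface Ethernet1/1",             # Nexus/IOSv plain Ethernet
    "interface GigabitEthernet0/1.100",  # subinterface with a dot
]

for line in interfaces:
    assert PATTERN.match(line), line

# Non-Ethernet and non-interface lines must not match.
assert not PATTERN.match("interface Loopback0")
assert not PATTERN.match("ip route 0.0.0.0 0.0.0.0 10.0.0.1")
```

Anchoring on the literal word Ethernet keeps Ethernet-specific checks (portfast, proxy-ARP, and the like) away from Loopback and tunnel interfaces, as noted above.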
| gharchive/issue | 2019-03-15T17:07:11 | 2025-04-01T06:45:08.841221 | {
"authors": [
"nickrusso42518",
"pdice"
],
"repo": "nickrusso42518/stig",
"url": "https://github.com/nickrusso42518/stig/issues/4",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2658632748 | CORS Problem in API
When I send a request to the API via browser, I get a CORS error. However, when I send a GET request with Postman, it returns the response without any problems.
Thank you 🙌🙌
You're welcome. Let me know if it works; if not, this issue will be reopened. Thanks.
It should now work on all routes. I've deployed a fix. I'll close this issue for now. Please let me know if you still encounter the same problem.
| gharchive/issue | 2024-11-14T12:10:45 | 2025-04-01T06:45:08.844537 | {
"authors": [
"nickypangers",
"onurcanozovali"
],
"repo": "nickypangers/passport-visa-api",
"url": "https://github.com/nickypangers/passport-visa-api/issues/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2203855077 | Updated selenium to pull from v4.18.0
fix #4
Thank you for your PR but we have gone with a different PR
| gharchive/pull-request | 2024-03-23T12:19:25 | 2025-04-01T06:45:08.965467 | {
"authors": [
"AutomatedTester",
"urizennnn"
],
"repo": "nightwatchjs/selenium-server-jar-download",
"url": "https://github.com/nightwatchjs/selenium-server-jar-download/pull/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1048575347 | Denoising does not work well on large images
Even moderately sized images (I tested 1500x2500 png) had almost no denoising at -n 3.
Might be good to add an option for small, medium and large images as well as multi-pass.
For small images it works pretty well. Very fast.
For working with large images (2k-10k) it's pretty useless. Probably due to how the original non-vulkan version works, though I haven't tested it.
I can confirm this behaviour. If the input image has noise and a high pixel count, upscaling makes the noise noticeably more apparent, and the results are comparable with the much faster Lanczos upscaling used by ImageMagick. (On my Ryzen 5 4600H, it takes 20–30s for waifu to complete the operation, compared to ~3s for ImageMagick.) This is the image I am using:
| gharchive/issue | 2021-11-09T12:56:45 | 2025-04-01T06:45:08.997013 | {
"authors": [
"Alex-Bujorianu",
"KeygenLLC"
],
"repo": "nihui/waifu2x-ncnn-vulkan",
"url": "https://github.com/nihui/waifu2x-ncnn-vulkan/issues/163",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
884870209 | Add display configuration
Works like this:
@nikitinas @UnlikeMars, you may suggest another naming for dataFrameConfig if you wish
Was merged manually: 4d126c87e6273d52ca1a7c99110683e8ea44f19b
| gharchive/pull-request | 2021-05-10T18:52:11 | 2025-04-01T06:45:09.005377 | {
"authors": [
"ileasile"
],
"repo": "nikitinas/dataframe",
"url": "https://github.com/nikitinas/dataframe/pull/22",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2318295458 | Non-Monotonic Predictions
Hi Niklas and Ouail,
I am running into an issue where during training of these monotonic NNs, the predictions are not coming out monotonic whenever the Lipschitz constant is >1.
Specifically, I am training a model similar to the "RobustModel" in https://github.com/niklasnolte/MonotonicNetworks/blob/main/Examples/Examples_paper.ipynb, except with a Lipschitz constant of 100.
I attached the barebones code (below) and the associated training data (x.pt and gt.pt in data.zip) that reproduces this non-monotonic issue. The code will raise an exception whenever a prediction from the model is non-monotonic (usually around 100-200 epochs into training). Please let me know if I am missing something.
Thank you in advance for your time and help!
Code:
import monotonicnetworks as lmn
import torch
import torch.nn as nn
import torch.optim as optim
from tqdm import tqdm

class MonotonicNN(nn.Module):
    def __init__(self):
        super(MonotonicNN, self).__init__()
        lnn = torch.nn.Sequential(
            lmn.LipschitzLinear(1, 512, kind="one", lipschitz_const=100),
            lmn.GroupSort(2),
            lmn.LipschitzLinear(512, 512, kind="one", lipschitz_const=100),
            lmn.GroupSort(2),
            lmn.LipschitzLinear(512, 512, kind="one", lipschitz_const=100),
            lmn.GroupSort(2),
            lmn.LipschitzLinear(512, 1, kind="one", lipschitz_const=100),
        )
        self.nn = lmn.MonotonicWrapper(lnn, lipschitz_const=100)
        self.loss = nn.MSELoss()

    def forward(self, x):
        x = self.nn(x)
        return x

    def is_monotonoic(self, ts):
        for i in range(1, len(ts)):
            if ts[i - 1] > ts[i]:
                print(ts[i - 1], ts[i])
                return False
        return True

    def step(self, x, gt):
        y_hat = self.forward(x)
        if not self.is_monotonoic(y_hat):
            raise
        loss = self.loss(y_hat, gt)
        return loss

    def training_step(self, x, gt):
        self.train()
        return self.step(x, gt)

# Training Loop
device = "cuda"
max_epochs = 100000
x = torch.load("x.pt").to(device)
gt = torch.load("gt.pt").to(device)
monotonic_model = MonotonicNN()
monotonic_model.to(device)
optimizer = optim.Adam(monotonic_model.parameters(), lr=1e-3)

for i in tqdm(range(0, max_epochs)):
    optimizer.zero_grad()
    loss = monotonic_model.training_step(x, gt)
    loss.backward()
    optimizer.step()
Because each of your layers has a Lipschitz constant of 100, the total Lipschitz constant is 100^4, so your monotonic wrapper would need that as its Lipschitz constant. Try 100**0.25 in each layer!
Each individual layer can essentially multiply its input by a factor of up to 100. Stacking them will lead to a very large Lipschitz constant for the entire network. What you'd want is 100^(1/d) for a network of depth d. Closing the issue. Feel free to open again if something else remains unclear.
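The depth rule in the answers above is plain arithmetic: if each of the d Lipschitz-constrained layers is allowed a constant c, composing them yields a bound of c^d, so each layer in the four-layer model from this report should use lipschitz_const=100**(1/4). A quick check, with no dependency on the monotonicnetworks package:

```python
target = 100.0   # desired Lipschitz bound for the whole network
depth = 4        # number of LipschitzLinear layers in the model above

per_layer = target ** (1 / depth)   # ~3.1623

# Composing d layers multiplies their bounds, so this recovers the target.
composed = per_layer ** depth
assert abs(composed - target) < 1e-9

# The original configuration instead composed to 100**4:
assert 100 ** depth == 100_000_000
```

With lipschitz_const=100 in every layer, the MonotonicWrapper's declared constant of 100 is far below the network's actual bound, which is why the monotonicity guarantee breaks down.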
| gharchive/issue | 2024-05-27T05:54:28 | 2025-04-01T06:45:09.014106 | {
"authors": [
"niklasnolte",
"okitouni",
"peterpaohuang"
],
"repo": "niklasnolte/MonotonicNetworks",
"url": "https://github.com/niklasnolte/MonotonicNetworks/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
64374987 | Not hiding overflowed content
I'm having trouble hiding overflowed content: the captured image shows the content from the top, not from the position the user had scrolled to.
In Browser:
Generated Image
Any ideas how I can solve this issue?
Duplicate of #511. Feel free to follow that one and close this one (also make sure you're using the most recent version from the master branch, there was a partial fix a couple weeks ago).
| gharchive/issue | 2015-03-25T21:16:22 | 2025-04-01T06:45:09.016543 | {
"authors": [
"HeadacheMan",
"usmonster"
],
"repo": "niklasvh/html2canvas",
"url": "https://github.com/niklasvh/html2canvas/issues/560",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
128449608 | html2canvas-beta-4 not capturing entire screen
I've been noticing consistent issues (in html2canvas 0.5-beta versions) with screenshot capturing when the page is very long vertically. What appears to be happening is that only the portion of the page which is actually within range of the viewport is captured, while the rest is cut off.
You might try making sure the page is scrolled to the top when calling html2canvas. I have this problem if I put the button at the bottom of the page, but if the button is at the top of the page and the user is scrolled up, all of the page is captured.
I'm using html2canvas v0.5.0-beta4, so I don't understand why it's not capturing the full view. It captures only the window size. Did you find a solution? Thanks
| gharchive/issue | 2016-01-25T03:00:38 | 2025-04-01T06:45:09.018316 | {
"authors": [
"TheJson",
"finetype",
"matthew-rister"
],
"repo": "niklasvh/html2canvas",
"url": "https://github.com/niklasvh/html2canvas/issues/774",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
170064029 | border-radius problem
@niklasvh
Hello, when I use html2canvas I found a border-radius problem. There are two pictures (the first is the original picture, the second is the screenshot); I use border-radius on the arrow.
The version of html2canvas is 0.4.1. Can you help to see how to solve it? Thanks a lot.
I solved it by looking at another issue, #318. Grom-S's answer works well.
| gharchive/issue | 2016-08-09T01:45:38 | 2025-04-01T06:45:09.020386 | {
"authors": [
"xuwf"
],
"repo": "niklasvh/html2canvas",
"url": "https://github.com/niklasvh/html2canvas/issues/924",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1923561540 | created temperature converter in rust
Description
Made a Rust program which can convert a temperature input from Celsius to Fahrenheit or Fahrenheit to Celsius, depending on the option provided.
Additional Context
Merged
This is an automated message from Fork, Commit, Merge [BOT].
Thank you for your contribution! Your pull request has been merged. The files have been reset for the next contributor.
What's next?
If you're looking for more ways to contribute, I invite you to check out my other projects. Just click here to find more. These projects contain real issues that you can help resolve. You can also check out the Influences section in the README to find more projects similar to this one.
Also, please leave a star on this project if you feel it helped you; I would really appreciate it.
I look forward to seeing your contributions!
| gharchive/pull-request | 2023-10-03T08:09:31 | 2025-04-01T06:45:09.028006 | {
"authors": [
"nikohoffren",
"rojin254"
],
"repo": "nikohoffren/fork-commit-merge",
"url": "https://github.com/nikohoffren/fork-commit-merge/pull/847",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
924230806 | button not working on login page
I would like to make it work. @nilisha-jais
Go ahead @nitishapiplani
I would like to work on this issue.
Go ahead @RituCs
Sorry for the delay. I was busy with final semester exams. Can you please tell me exactly what I have to do with the button on the Login page? @niloysikdar @nilisha-jais
@niloysikdar, please have a look.
If it's working fine then we can close this issue.
Okay @harshita2216
| gharchive/issue | 2021-06-17T18:44:59 | 2025-04-01T06:45:09.047973 | {
"authors": [
"RituCs",
"harshita2216",
"nilisha-jais",
"niloysikdar",
"nitishapiplani"
],
"repo": "nilisha-jais/Musicophilia",
"url": "https://github.com/nilisha-jais/Musicophilia/issues/355",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
743281129 | Organization and auto-gen improvements to command doc
This PR covers an organization change and a step in the direction of further automation for the detailed nim command documentation. The current organization is supported by a script in nimbella-cli (public). With a few exceptions, the documents being committed here are auto-generated and incomplete (they don't have the human-provided text yet).
Will the auto generation be documented?
Yes, it should be documented, but:
- I'm not sure where the documentation should go
- I'd prefer to wait anyway until the process of absorbing changes and integrating the automated and non-automated steps is better worked out.
I'm going to merge this so that I can rebase it into another open PR. The documentation being merged is skeletal in places but it replaces a more monolithic document that was equally skeletal and there are certainly many other empty or incomplete documents.
| gharchive/pull-request | 2020-11-15T15:33:45 | 2025-04-01T06:45:09.093913 | {
"authors": [
"joshuaauerbachwatson"
],
"repo": "nimbella/docs",
"url": "https://github.com/nimbella/docs/pull/12",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
168816408 | WindowsError: [Error 2] The system cannot find the file specified
C:\Users\UserName\Desktop\ninja-master>python configure.py
Traceback (most recent call last):
File "configure.py", line 320, in <module>
if platform.msvc_needs_fs():
File "configure.py", line 84, in msvc_needs_fs
stderr=subprocess.PIPE)
File "D:\ts_mirr\python\2.7.9_2\lib\subprocess.py", line 710, in __init__
errread, errwrite)
File "D:\ts_mirr\python\2.7.9_2\lib\subprocess.py", line 958, in _execute_child
startupinfo)
WindowsError: [Error 2] The system cannot find the file specified
Looks like this is trying to run cl. We should make the error message better, but to solve your problem you should read the Windows build instructions: https://github.com/ninja-build/ninja/blob/master/HACKING.md#building-for-windows
C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE>python "c:\Users\UserName\Desktop\ninja-master\configure.py" --bootstrap
Traceback (most recent call last):
File "c:\Users\UserName\Desktop\ninja-master\configure.py", line 229, in <module>
ninja_writer = ninja_syntax.Writer(open(BUILD_FILENAME, 'w'))
IOError: [Errno 13] Permission denied: 'build.ninja'
Running from the Visual Studio environment led me to this error (the file build.ninja is not open). If I modify the code of configure.py at line 228 and set BUILD_FILENAME to an absolute path, I come back to my previous error:
You want to be in the ninja directory when running it, otherwise it'll generate the build files in the VS directory, which isn't what you want.
So,
C:\Users\UserName\Desktop\ninja-master>"C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat"
C:\Users\UserName\Desktop\ninja-master>python configure.py --bootstrap
Thank you! This worked. A suggestion would be to provide the user with a set of "fixes" for specific errors, or a step-by-step installation tutorial.
If you're just using ninja, you can also get the binaries from https://github.com/ninja-build/ninja/releases as described in the readme.
| gharchive/issue | 2016-08-02T07:22:00 | 2025-04-01T06:45:09.105519 | {
"authors": [
"adrya407",
"evmar",
"sgraham"
],
"repo": "ninja-build/ninja",
"url": "https://github.com/ninja-build/ninja/issues/1179",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1422787267 | Reloading without aiming P.R.L. 412 crashes game
Validation
[x] Game has been updated to the latest RE4HD release (v1.1: https://www.re4hd.com/?p=9552)
[X] Data has been validated with SFV (a guide to using QuickSFV is available at that link, use that with this 1.1 SFV: BIO4-HDProject1.1-SFV.zip)
Describe your issue here (drag+drop ZIP to attach it)
When the "reload without aiming first" feature is active, pressing the reload button while the P.R.L. 412 is equipped (without aiming) crashes the game. This does not happen with other weapons that don't reload, such as rocket launchers.
bio4.exe.20221025114749.zip
Here's a comment in re4hd.com from another user about the same issue:
«
[...] Started a cleared game run on my professional save and got a P.R.L. and Handcannon from the store, except whenever I press R to reload on the P.R.L. the game insta-crashes. [...]
»
«
Tested it some more and it seems to only be a problem when “Allow reload without aiming” is turned on and I press R without aiming.
»
For the record ;D
Thanks!
Ah, hmm, gotta admit I didn't think about the PRL. No idea why it would just crash though, but I'll take a look soon.
Seems the crash was already fixed (probably in d6e789bfdb59d8eb2c05b6fc56a6924600367981 ?), but I still ended up making a small improvement.
Can you test this build, please?
https://github.com/nipkownix/re4_tweaks/suites/9090505610/artifacts/421213604
Works now. Thanks!
| gharchive/issue | 2022-10-25T16:58:20 | 2025-04-01T06:45:09.120595 | {
"authors": [
"albertre4HD",
"ddubs13",
"nipkownix"
],
"repo": "nipkownix/re4_tweaks",
"url": "https://github.com/nipkownix/re4_tweaks/issues/352",
"license": "Zlib",
"license_type": "permissive",
"license_source": "github-api"
} |
1684233923 | ENH: Improve same-user detection in Docker environments
Previously, when users were running multiple docker run processes, either in parallel or subsequently, each container was given a unique hostname, making our "consistent" UUID factory anything but. However, it seems /proc/self/mountinfo (and /proc/1/mountinfo) does contain some hashes of the filesystem, and these are consistent between docker run invocations (at least with engine v20.10.14). This comes with a caveat though: spinning off a container from another image will change this hash, so consistency has only been shown when using the same image.
This will require a bit more testing before going live to better understand its efficacy/reliability.
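As a rough illustration of the approach described above (not migas-py's actual implementation: hashing the whole file and the exact UUID construction are assumptions made for this sketch), deriving a deterministic UUID from mountinfo text could look like:

```python
import hashlib
import uuid

def uuid_from_mountinfo(mountinfo_text: str) -> uuid.UUID:
    """Derive a deterministic UUID from the contents of a mountinfo file."""
    digest = hashlib.sha256(mountinfo_text.encode()).digest()
    # uuid.UUID takes exactly 16 bytes; truncate the 32-byte digest.
    return uuid.UUID(bytes=digest[:16], version=4)

# Identical mountinfo text yields the same UUID; any change yields a new one.
sample = "36 25 0:30 / /proc rw,nosuid - proc proc rw\n"
assert uuid_from_mountinfo(sample) == uuid_from_mountinfo(sample)
assert uuid_from_mountinfo(sample) != uuid_from_mountinfo(sample + "x")
```

In practice the input would come from reading /proc/self/mountinfo, so repeated `docker run` invocations of the same image would map to the same UUID, while a different image (with different filesystem hashes in its mount table) would map to a different one.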
To test
[x] Adding mounts (consistent)
[x] Removing mounts
[x] Downgrading Docker versions
Sorry. At a park on my phone rn.
| gharchive/pull-request | 2023-04-26T04:10:56 | 2025-04-01T06:45:09.123945 | {
"authors": [
"effigies",
"mgxd"
],
"repo": "nipreps/migas-py",
"url": "https://github.com/nipreps/migas-py/pull/33",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
193587806 | [idea] Plot mean normalized image intensity?
http://spinthatresonates.blogspot.com/2016/12/first-few-images-are-brighter-than-rest.html
MRIQC is meant to identify those nonsteady states, and we have some metrics (and the carpet plot) to report on that. We have initiated some related lines of work looking at signal drifts and coil failures. I think this suggestion can be closed for now (opened a discussion: https://github.com/nipreps/mriqc/discussions/1341).
| gharchive/issue | 2016-12-05T19:28:08 | 2025-04-01T06:45:09.125805 | {
"authors": [
"chrisgorgo",
"oesteban"
],
"repo": "nipreps/mriqc",
"url": "https://github.com/nipreps/mriqc/issues/320",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
Update dependencies for library and convert to Kotlin for readability
Do you want to request a feature or report a bug?
Feature
What is the current behaviour?
As expected, but the code is quite long and can be simplified
If the current behaviour is a bug, please provide the steps to reproduce.
Any logs, error output, bug reports etc?
What is the expected behaviour?
Any other comments?
I can create a pull request in a few weeks so you can merge it
What versions of software are you using?
Device Information:
Android Version:
Configuration Information:
Misc:
As an app, it makes sense to convert to Kotlin, but as a library, it doesn't. Converting to Kotlin only means that the library would now have a transitive dependency on the Kotlin standard library. For people who have yet to move to Kotlin in their apps, the library, if added, will only add bloat to their app.
On the other hand, this being in Java is completely usable in the Kotlin world. However, in the future, when there are enough users of Kotlin on Android, the library could be ported to Kotlin. Until then I do not see any benefit in porting this to Kotlin.
Closing as this proposal doesn't bring any useful benefit to users of this library.
Oh, my fault, I saw those dependencies were for the sample app :D
| gharchive/issue | 2018-09-19T20:03:59 | 2025-04-01T06:45:09.151866 | {
"authors": [
"LukasAnda",
"nisrulz"
],
"repo": "nisrulz/packagehunter",
"url": "https://github.com/nisrulz/packagehunter/issues/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1106300915 | Error upon installation using react-native@0.64.2
Hi. I get the following error once I install the package and run react-native run-ios
ERROR Invariant Violation: Native module cannot be null.
ERROR Invariant Violation: Module AppRegistry is not a registered callable module (calling runApplication). A frequent cause of the error is that the application entry file path is incorrect.
This can also happen when the JS bundle is corrupt or there is an early initialization error when loading React Native.
ERROR Invariant Violation: Module AppRegistry is not a registered callable module (calling runApplication). A frequent cause of the error is that the application entry file path is incorrect.
This can also happen when the JS bundle is corrupt or there is an early initialization error when loading React Native.
I have tried the following script to solve it, but without success.
npm start --reset-cache && killall -9 node && rm -rf node_modules && npm install && cd ios && rm -rf Pods && pod cache clean --all && pod install && cd ..
But still I get the same error.
Do you have any suggestions on how I could solve it?
Thanks!
Hello, I am getting the same error as well
Can you please confirm you have installed the react-native-reanimated and react-native-svg modules? react-native-reanimated was introduced to this lib starting from v2.0.0.
| gharchive/issue | 2022-01-17T22:16:33 | 2025-04-01T06:45:09.171388 | {
"authors": [
"africanfruit",
"erikvlarsson",
"nithinpp69"
],
"repo": "nithinpp69/react-native-circular-progress-indicator",
"url": "https://github.com/nithinpp69/react-native-circular-progress-indicator/issues/28",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
123895065 | expanded the nit_env.sh script to register nit within .bash_profile and .bashrc
When running source misc/nit_env.sh install I noticed that it resulted in the following error:
grep: User/user .profile: No such file or directory
The nit_env.sh script created a .profile file in the home directory despite an existing .bash_profile. The nit command won't work, since the shell will read the .bash_profile instead of the .profile.
This PR checks for the existence of a .profile, .bashrc, or .bash_profile and writes to the one it finds; if it finds none of those, it creates a .profile and writes to that one.
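The selection logic described above boils down to "append to the first startup file that already exists, otherwise fall back to creating ~/.profile". Sketched here in Python for clarity (the actual script is shell, and the candidate order shown is an assumption):

```python
import os

# Candidate startup files; the exact precedence the final shell script
# uses is an assumption made for this sketch.
CANDIDATES = (".bash_profile", ".bashrc", ".profile")

def pick_profile(home):
    """Return the shell startup file nit_env.sh should append to."""
    for name in CANDIDATES:
        path = os.path.join(home, name)
        if os.path.exists(path):
            return path
    # None found: fall back to ~/.profile, which the caller then creates.
    return os.path.join(home, ".profile")
```

This avoids the original bug, where a fresh ~/.profile was created even though an existing ~/.bash_profile would shadow it for login shells.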
If the PR is accepted the documentation will have to be updated.
Nice Christmas present. +1
jenkins: ok to test
jeninns: add to whitelist
Please amend your commit to add the Signed-off-by: cf. http://gresil.org/jenkins/job/CI_github/3959/testReport/junit/cmd/check/signed_off_by/ and http://nitlanguage.org/internal/patches.html
branch is merged.
| gharchive/pull-request | 2015-12-25T19:01:10 | 2025-04-01T06:45:09.177135 | {
"authors": [
"itsWill",
"privat"
],
"repo": "nitlang/nit",
"url": "https://github.com/nitlang/nit/pull/1913",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2069382102 | chore: update wake and switch to nixpkgs 23.11
I did:
update wake to the latest version
switch to the latest stable nixpkgs version (poetry2nix doesn't work with 23.05 https://github.com/nix-community/poetry2nix/issues/1422)
I also need to instantiate poetry2nix.mkPoetryApplication, but I can't. I hope it closes #421.
But like that it works fine: https://github.com/nix-community/poetry2nix/blob/master/templates/app/flake.nix#L18
@selfuryon I have a competing unfinished commit to update to nixpkgs-23.11. Here is what I found:
* lighthouse doesn't build on 23.11 because Rust is too new. Lighthouse depends on an old version of libmdbx-rs in its slasher Cargo.toml, and that old libmdbx-rs in turn points to an old bindgen library. The old bindgen dependency chokes on an LLVM dependency update like [this](https://github.com/rust-lang/rust-bindgen/issues/2312). bindgen would need to be updated inside sigp's fork of libmdbx-rs. But I am not going to touch that, to be frank. I "solved" this by introducing a nixpkgs_2305 input and just having lighthouse stay behind.
* ssvnode: My fix is exactly like yours. Just my override is inside packages/default.nix.
* I haven't fixed Nimbus yet in my commit.
Just FYI.
For the first point, this one seems related: https://github.com/sigp/lighthouse/issues/4280. Maybe we can just disable the slasher feature for now to mitigate that. I will try.
Good, switching to slasher-lmdb (it's enabled by default) helps
The only problem is that poetry2nix.mkPoetryApplication still is a derivation, idk how to fix that
The only problem is that poetry2nix.mkPoetryApplication still is a derivation, idk how to fix that
My approach was to also add the poetry2nix overlay to pkgs
pkgs = lib.extras.nix.mkNixpkgs {
inherit system;
inherit (inputs) nixpkgs;
+ overlays = [inputs.poetry2nix.overlays.default];
};
This made my nix flake check pass.
The only problem is that poetry2nix.mkPoetryApplication still is a derivation, idk how to fix that
My approach was to also add the poetry2nix overlay to pkgs
pkgs = lib.extras.nix.mkNixpkgs {
inherit system;
inherit (inputs) nixpkgs;
+ overlays = [inputs.poetry2nix.overlays.default];
};
This made my nix flake check pass.
Does nix flake show also work fine without --allow-import-from-derivation?
Does nix flake show also work fine without --allow-import-from-derivation?
No, but nix flake show doesn't work either on the current main branch.
Does nix flake show also work fine without --allow-import-from-derivation?
No, but nix flake show doesn't work either on the current main branch.
Yeah, but with flake-utils in the same style it works; I just wanted to fix that. But maybe it would be easy to fix a script for nix-update.
@brianmcgee @aldoborrero can you pls check this PR and approve it?
| gharchive/pull-request | 2024-01-08T00:41:14 | 2025-04-01T06:45:09.193326 | {
"authors": [
"catwith1hat",
"selfuryon"
],
"repo": "nix-community/ethereum.nix",
"url": "https://github.com/nix-community/ethereum.nix/pull/427",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
812861443 | fix(build_loop.rs): don't build a project twice if there are several inotify notifications
Reopening of https://github.com/target/lorri/pull/465
Checklist
[ ] Updated the documentation (code documentation, command help, ...)
[ ] Tested the change (unit or integration tests)
[ ] Amended the changelog in release.nix (see release.nix for instructions)
Superseded by #7
| gharchive/pull-request | 2021-02-21T13:56:17 | 2025-04-01T06:45:09.195930 | {
"authors": [
"symphorien"
],
"repo": "nix-community/lorri",
"url": "https://github.com/nix-community/lorri/pull/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
140655333 | Добавить поддержку списоков метрик, группировок и сортировок для формирования Отчетов Metrica
Добавить возможность передавать массивы параметров для metrics, dimensions и sort.
Актуально для:
https://tech.yandex.ru/metrika/doc/api2/api_v1/bytime-docpage/
https://tech.yandex.ru/metrika/doc/api2/api_v1/data-docpage/
Основано на комментарии https://github.com/nixsolutions/yandex-php-library/pull/144#issuecomment-196050650
The functionality has already been added in https://github.com/nixsolutions/yandex-php-library/pull/153
| gharchive/issue | 2016-03-14T12:07:35 | 2025-04-01T06:45:09.200111 | {
"authors": [
"naxel",
"xgamtx"
],
"repo": "nixsolutions/yandex-php-library",
"url": "https://github.com/nixsolutions/yandex-php-library/issues/145",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
postKeyboardInfo won’t get called when switching from the emoji keyboard to the English keyboard on iOS 9
However, the English keyboard is slightly shorter than the emoji keyboard. Not sure if it’s an iOS 9 bug. I’m on iPhone 5s, iOS 9 beta 4.
In my test, the English keyboard has the same height as the Emoji keyboard (iPhone 5, iOS 9 beta 4).
But there was a bug where changing from the Emoji keyboard to the Chinese Handwriting keyboard would not post keyboard info. Fixed in 0.3.1.
Thanks!
Awesome, thank you!
| gharchive/issue | 2015-08-02T05:21:40 | 2025-04-01T06:45:09.201953 | {
"authors": [
"nixzhu",
"xhacker"
],
"repo": "nixzhu/KeyboardMan",
"url": "https://github.com/nixzhu/KeyboardMan/issues/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
232457288 | Migration of Shared Steps
FieldMap needs information about stores to migrate Shared Steps in Test Cases.
To migrate a Shared Step, the Shared Step needs to be migrated first -> ORDER BY of selected WIs is now sorted by (WI name, ChangeDate desc).
When developing in DEBUG mode it does not make sense to warn of a newer version.
Stacktrace is written when exceptions occur
It looks like this now has multiple sets of changes. Please make each pull request for a single purpose so that they can be vetted and merged independently. Also minimise all affected files to only those with relevant changes.
I still see a bunch of conflicts that need resolved...
| gharchive/pull-request | 2017-05-31T05:27:50 | 2025-04-01T06:45:09.213346 | {
"authors": [
"MrHinsh",
"visma-theothustrup"
],
"repo": "nkdAgility/vsts-sync-migration",
"url": "https://github.com/nkdAgility/vsts-sync-migration/pull/23",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
365116107 | Esp32 restarts when btStop() function is called.
I am facing a problem with shutting down Bluetooth. When I use the btStop() function in the BLE_uart example code from the BLE library, my esp32 restarts after a few seconds. Below is the code I am using:
#include <BLEDevice.h>
#include <BLEServer.h>
#include <BLEUtils.h>
#include <BLE2902.h>
BLECharacteristic *pCharacteristic;
BLEServer *pServer;
bool deviceConnected = false;
uint8_t txValue = 0;
int counter=0;
// See the following for generating UUIDs:
// https://www.uuidgenerator.net/
#define SERVICE_UUID "6E400001-B5A3-F393-E0A9-E50E24DCCA9E" // UART service UUID
#define CHARACTERISTIC_UUID_RX "6E400002-B5A3-F393-E0A9-E50E24DCCA9E"
#define CHARACTERISTIC_UUID_TX "6E400003-B5A3-F393-E0A9-E50E24DCCA9E"
class MyServerCallbacks: public BLEServerCallbacks {
void onConnect(BLEServer* pServer) {
deviceConnected = true;
};
void onDisconnect(BLEServer* pServer) {
deviceConnected = false;
}
};
class MyCallbacks: public BLECharacteristicCallbacks {
void onWrite(BLECharacteristic *pCharacteristic) {
std::string rxValue = pCharacteristic->getValue();
if (rxValue.length() > 0) {
Serial.println("*********");
Serial.print("Received Value: ");
for (int i = 0; i < rxValue.length(); i++)
Serial.print(rxValue[i]);
Serial.println();
Serial.println("*********");
}
}
};
void setup() {
Serial.begin(115200);
// Create the BLE Device
BLEDevice::init("UART Service");
// Create the BLE Server
pServer = BLEDevice::createServer();
pServer->setCallbacks(new MyServerCallbacks());
// Create the BLE Service
BLEService *pService = pServer->createService(SERVICE_UUID);
// Create a BLE Characteristic
pCharacteristic = pService->createCharacteristic(
CHARACTERISTIC_UUID_TX,
BLECharacteristic::PROPERTY_NOTIFY
);
pCharacteristic->addDescriptor(new BLE2902());
BLECharacteristic *pCharacteristic = pService->createCharacteristic(
CHARACTERISTIC_UUID_RX,
BLECharacteristic::PROPERTY_WRITE
);
pCharacteristic->setCallbacks(new MyCallbacks());
// Start the service
pService->start();
// Start advertising
pServer->getAdvertising()->start();
Serial.println("Waiting a client connection to notify...");
}
void loop() {
while(counter < 5)
{
if (deviceConnected) {
Serial.printf("*** Sent Value: %d ***\n", txValue);
pCharacteristic->setValue(&txValue, 1);
pCharacteristic->notify();
txValue++;
counter++;
}
delay(1000);
}
pServer->stopAdvertising();
if(btStop())
{
Serial.println("Bluetooth turned off Successfully.");
}
while(1);
}
I also get the following error on the Serial monitor before the restart:
Waiting a client connection to notify...
*** Sent Value: 0 ***
*** Sent Value: 1 ***
*** Sent Value: 2 ***
*** Sent Value: 3 ***
*** Sent Value: 4 ***
Bluetooth turned off Successfully.
Guru Meditation Error: Core 0 panic'ed (LoadProhibited)
. Exception was unhandled.
Register dump:
PC : 0x4004baaa PS : 0x00060031 A0 : 0x80085d2d A1 : 0x3ffc0560
A2 : 0x00000000 A3 : 0x00000000 A4 : 0x3ffc5b04 A5 : 0x00000000
A6 : 0x00000001 A7 : 0x00000001 A8 : 0x00000000 A9 : 0x3ffb00d8
A10 : 0x00000000 A11 : 0x3ffd4fe0 A12 : 0x8008780c A13 : 0x3ffd9db0
A14 : 0x00000000 A15 : 0x3ffc3670 SAR : 0x00000018 EXCCAUSE: 0x0000001c
EXCVADDR: 0x00000048 LBEG : 0x00000000 LEND : 0x00000000 LCOUNT : 0x00000000
Backtrace: 0x4004baaa:0x3ffc0560 0x40085d2a:0x3ffc0580 0x400156a5:0x3ffc05b0 0x400552cd:0x3ffc05d0 0x40086db7:0x3ffc05f0 0x4008166d:0x3ffc0610 0x40138317:0x00000000
Rebooting...
ets Jun 8 2016 00:22:57
rst:0xc (SW_CPU_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
configsip: 0, SPIWP:0xee
clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00
mode:DIO, clock div:1
load:0x3fff0018,len:4
load:0x3fff001c,len:956
load:0x40078000,len:0
load:0x40078000,len:13076
entry 0x40078a58
Waiting a client connection to notify...
The code I have mentioned above is just to learn how to stop Bluetooth when it's not in use. I want to use this function in certain projects I will be working on.
Try this code instead of btStop (you won't be able to use BT anymore until you restart the esp32):
esp_bluedroid_disable();
esp_bluedroid_deinit();
esp_bt_controller_disable();
esp_bt_controller_deinit();
At the end you can try to add this to release heap:
esp_bt_mem_release(ESP_BT_MODE_BTDM);
@S-March To be honest I don't know, I've never tried it. But I also have good news: until you release the memory, you can deinit and init BT again.
https://github.com/nkolban/esp32-snippets/issues/630#issuecomment-427635732
| gharchive/issue | 2018-09-29T09:33:21 | 2025-04-01T06:45:09.241306 | {
"authors": [
"chegewara",
"mimansamaheshwari"
],
"repo": "nkolban/esp32-snippets",
"url": "https://github.com/nkolban/esp32-snippets/issues/661",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2142189879 | Infinite vite.config.ts reload loop
Version 4.0 of the vanilla-extract Vite plugin, when used along with solid-start, results in an infinite change-detection loop on vite.config.ts. This seems to be caused by Vite's temporary timestamped config files.
Log
vinxi 0.3.3
vinxi Found vite.config.js with app config
vinxi starting dev server
The CJS build of Vite's Node API is deprecated. See https://vitejs.dev/guide/troubleshooting.html#vite-cjs-node-api-deprecated for more details.
vinxi change detected 1 in vite.config.ts.timestamp-1708343215792-222a922c5f9cd.mjs
vinxi reloading app
vinxi Found vite.config.js with app config
vinxi change detected 1 in vite.config.ts.timestamp-1708343215793-bd841f136bbf3.mjs
vinxi reloading app
vinxi Found vite.config.js with app config
vinxi change detected 1 in vite.config.ts.timestamp-1708343215796-13daa118e8c7b.mjs
vinxi reloading app
vinxi Found vite.config.js with app config
vinxi change detected 1 in vite.config.ts.timestamp-1708343218501-0dea49476940c.mjs
vinxi reloading app
vinxi Found vite.config.js with app config
vinxi change detected 1 in vite.config.ts.timestamp-1708343218502-28d8dd889ec74.mjs
vinxi reloading app
Possible fix
Adding an ignore pattern for timestamp files to the Vite config watcher in packages\vinxi\bin\cli.mjs seems to fix the issue:
watcher = chokidar.watch(
["app.config.*", "vite.config.*", configFile].filter(Boolean),
{
ignoreInitial: true,
ignored: "**timestamp**"
},
);
yeah that fix makes sense, we shouldn't listen to the temporary files vite creates with the timestamps.
Want to make a PR for this?
Sure — should the ignore pattern be more strict, or is it fine to keep it the same as in the snippet I provided in the comment above?
PR is up https://github.com/nksaraf/vinxi/pull/203
Fix is merged, and will be part of next release
| gharchive/issue | 2024-02-19T11:50:37 | 2025-04-01T06:45:09.245761 | {
"authors": [
"Nvos",
"nksaraf"
],
"repo": "nksaraf/vinxi",
"url": "https://github.com/nksaraf/vinxi/issues/201",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
442 hatchings in SQL, 5 deprecated, but 435 in the publication's query output. Where are the last 2?
@gerritversteegh from the hatchings query I get 435; there are 442 in the SQL (highest ID), of which 5 are deprecated. Am I missing 2, or are two ID numbers missing?
@ElisabethKloren There indeed turn out to be 2 IDs missing in the SQL database, namely IDs 275 and 276. These are not defined, so the publication is complete after all. The import is also complete, because the deprecated IDs were imported (see screenshot).
Great, thank you!
| gharchive/issue | 2024-04-19T12:13:03 | 2025-04-01T06:45:09.247670 | {
"authors": [
"ElisabethKloren",
"gerritversteegh"
],
"repo": "nl-digigo/NLCS",
"url": "https://github.com/nl-digigo/NLCS/issues/394",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
DSCS-345 puts the public access condition check back into the rules for determining whether a work can be used as a representative image, as it made the image available in tarkine
Original pull request: https://github.com/nla/amberdb/pull/565
@scoen
👍
| gharchive/pull-request | 2016-07-22T06:33:43 | 2025-04-01T06:45:09.250721 | {
"authors": [
"m-r-c",
"scoen"
],
"repo": "nla/amberdb",
"url": "https://github.com/nla/amberdb/pull/569",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1381374887 | feat: adds support for overriding path owner
Fixes #44
This PR adds four new fields: Uid, Gid, Uname, and Gname. They allow modifying their respective ownership permissions on paths. The default behavior of keeping non-matched files as "0/0/root/root" is maintained for backward compatibility. The only other change is that specifying mode is now optional; this is to allow changing ownership while still keeping the default umask.
This PR includes the permission fix from #46. Feel free to close that one and merge this if it makes things cleaner.
All tests are passing, but I also tested with the following flake:
{
inputs.nix2container.url = "github:jmgilman/nix2container/change-owner";
inputs.nixpkgs.follows = "nix2container/nixpkgs";
outputs = { self, nixpkgs, nix2container }:
let
pkgs = import nixpkgs { system = "aarch64-linux"; };
n2c = nix2container.packages.aarch64-linux.nix2container;
entrypoint = pkgs.writeShellApplication {
name = "entrypoint";
text = ''
echo "Hello, world!"
'';
};
test = pkgs.runCommand "test" { } ''
mkdir -p $out/tmp
touch $out/tmp/test1.txt
touch $out/tmp/test2.txt
'';
in
{
packages.aarch64-linux.hello = n2c.buildImage {
maxLayers = 100;
name = "hello";
config = {
entrypoint = [ "${entrypoint}/bin/entrypoint" ];
};
copyToRoot = [ test pkgs.bash pkgs.coreutils ];
perms = [
{
path = test;
regex = "/tmp/test1.txt";
uid = 1001;
gid = 1001;
}
];
};
};
}
Inside the container:
bash-5.1# ls -la /tmp
total 8
dr-xr-xr-x 2 0 0 4096 Jan 1 1970 .
drwxr-xr-x 1 0 0 4096 Sep 21 19:09 ..
-r--r--r-- 1 1001 1001 0 Jan 1 1970 test1.txt
-r--r--r-- 1 0 0 0 Jan 1 1970 test2.txt
Yeah, nice!
I think we could first merge the MR #46. Then it would be nice to add a test for this ownership feature.
(I could add the test for you: the test framework is pretty fragile...)
Sure, I was going to add a test, just hadn't peeked my head in to see what I'm working with. Are you wanting to add a Go test or perhaps another example that gets included in the Nix tests? Or both?
@jmgilman That's up to you! (But I think adding a Nix test would be a bit easier.)
Ok, I've added a nix-based test that's similar to the perms test but just asserts that file ownership has changed as expected. Are we good to merge now?
@jmgilman yep, lgtm! Could you just rebase onto master?
I don't know about your use case, but I think it could also be useful to add uid/gid/uname/gname attributes on the buildLayer and buildImage functions. These attributes would allow specifying the user/group for the whole layer.
Using the perms struct to set the user on the whole layer is a bit annoying, I think, because you would need to explicitly set the perms on all store paths composing your layer. Note the perms struct is still useful to set the user/group on some specific files without having to create a dedicated layer.
That's a great idea! I'll see about tackling that in a future PR :) Everything should be up to date now.
@jmgilman thank you!
| gharchive/pull-request | 2022-09-21T19:07:11 | 2025-04-01T06:45:09.263183 | {
"authors": [
"jmgilman",
"nlewo"
],
"repo": "nlewo/nix2container",
"url": "https://github.com/nlewo/nix2container/pull/47",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
463724029 | Add support for app_home_opened event
EventsAPI Error parsing inner event: app_home_opened, Inner Event does not exist! app_home_opened
merged support for this today.
| gharchive/issue | 2019-07-03T12:27:51 | 2025-04-01T06:45:09.267375 | {
"authors": [
"artempanko",
"james-lawrence"
],
"repo": "nlopes/slack",
"url": "https://github.com/nlopes/slack/issues/548",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
1011712762 | emojis are not tokenized very well
Maybe add something like this as a pre or post processing step?
It might make sense to download the emoji list and store it as part of the build, so people do not need to load the emojis module, ...
import emoji
from emoji import unicode_codes
import re
EMOJI_UNICODE = unicode_codes.EMOJI_UNICODE['en']
emojis = sorted(EMOJI_UNICODE.values(), key=len, reverse=True)
print(*emojis, sep='\n')
emoji_regexp = f"({'|'.join(re.escape(u) for u in emojis)})"
EMOJI_XL = re.compile(rf"\B({emoji_regexp})", flags=re.UNICODE)
EMOJI_XR = re.compile(rf"({emoji_regexp})\B", flags=re.UNICODE)
EMOJI_WL = re.compile(rf"(\w)({emoji_regexp})", flags=re.UNICODE)
EMOJI_WR = re.compile(rf"({emoji_regexp})(\w)", flags=re.UNICODE)
EMOJI_REGEX = re.compile(rf"({emoji_regexp})", flags=re.UNICODE)
def split_emoji(text):
text = EMOJI_REGEX.sub(r' \1 ', text)
    text = text.replace('  ', ' ')  # collapse double spaces
return text
test = "🤔 🙈 me así, se😌 ds 💕👭👙 hello 👩🏾🎓 emoji hello 👨👩👦👦 how are 😊 you today🙅🏽🙅🏽"
#test = "They are going to start a direct flight soon😠"
print(test)
print(split_emoji(test))
I'm in favor of this, as long as it doesn't make the tokenization considerably slower. I don't think it would have to.
Perhaps a good place for it would be to include it in the TweetTokenizer? The regexps for those are defined here.
I would also prefer not having additional dependencies for this, but perhaps the dependency allows us to automatically update to newer emoji lists. After all, they're up to v14.0 now, and will continue updating.
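A dependency-free variant along the same lines could look like this (the Unicode ranges below are illustrative and deliberately coarse, not an exhaustive emoji list):

```python
import re

# Coarse, illustrative emoji ranges; a real implementation would use
# the full Unicode emoji data (v14.0 and onward) instead.
EMOJI = re.compile(
    "["
    "\U0001F300-\U0001F5FF"  # symbols & pictographs
    "\U0001F600-\U0001F64F"  # emoticons
    "\U0001F680-\U0001F6FF"  # transport & map symbols
    "\U0001F900-\U0001F9FF"  # supplemental symbols
    "\u2600-\u27BF"          # miscellaneous symbols & dingbats
    "]+"
)

def split_emoji(text):
    # Surround each emoji run with spaces, then squeeze repeated spaces.
    return re.sub(r"\s{2,}", " ", EMOJI.sub(r" \g<0> ", text)).strip()

print(split_emoji("They are going to start a direct flight soon😠"))
# → They are going to start a direct flight soon 😠
```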
| gharchive/issue | 2021-09-30T05:09:30 | 2025-04-01T06:45:09.271769 | {
"authors": [
"fcbond",
"tomaarsen"
],
"repo": "nltk/nltk",
"url": "https://github.com/nltk/nltk/issues/2829",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
111501770 | Fix CI issue with Stanford CoreNLP jar
Makes Jenkins download both Stanford jars, and skips the Stanford parser doctests if either of the jars is missing.
Fixes the CI issue that cropped up after merging #1163.
Great, thanks @futurulus
| gharchive/pull-request | 2015-10-14T21:56:13 | 2025-04-01T06:45:09.272956 | {
"authors": [
"futurulus",
"stevenbird"
],
"repo": "nltk/nltk",
"url": "https://github.com/nltk/nltk/pull/1174",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1678145994 | Set reachable depth for generate
Fix #3072: when a grammar is recursive, the default generation depth (sys.maxsize) is wildly out of reach.
import sys
print(sys.maxsize)
9223372036854775807
print(sys.getrecursionlimit())
1000
So this PR adds a test to check if the grammar is recursive, in which case the default "depth" is lowered to a safe value.
Because of indirect recursion between generate_all() and generate_one(), the depth cannot exceed one third of the recursion limit. Additionally, Python 3 has the undocumented peculiarity that the actually reachable recursion limit is 3 less than sys.getrecursionlimit().
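That arithmetic can be sketched like this (an illustration of the constraint only, not the PR's actual code; the function name is made up):

```python
import sys

def safe_generate_depth(limit=None, slack=3, frames_per_level=3):
    # Each generated level costs roughly three Python stack frames
    # (generate_all -> generate_one -> generate_all), and CPython stops
    # about `slack` frames short of the nominal recursion limit.
    if limit is None:
        limit = sys.getrecursionlimit()
    return (limit - slack) // frames_per_level

print(safe_generate_depth(1000))  # → 332
```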
With this PR, generating from the grammar in #3072 no longer raises any error:
from nltk.grammar import CFG
from nltk.parse.generate import generate
G = CFG.fromstring("""
S -> 'a' S |
""")
gen = generate(G)
out = next(gen)
print(len(out))
329
print(''.join(out))
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
Here's a small experiment to verify the unintuitive behaviour of the recursion limit in Python 3:
from sys import setrecursionlimit
setrecursionlimit(10)
def recurse(n):
print(n)
recurse(n+1)
recurse(1)
1
2
3
4
5
6
7
After printing 7, Python 3 raises a RecursionError. But with Python 2, the same code is able to reach 9.
I suggest dropping the recursion check, and setting the parameter regardless of whether the grammar is recursive
Strangely, CI failed only with Python 3.11 on macos-latest.
| gharchive/pull-request | 2023-04-21T08:51:17 | 2025-04-01T06:45:09.277723 | {
"authors": [
"ekaf",
"stevenbird"
],
"repo": "nltk/nltk",
"url": "https://github.com/nltk/nltk/pull/3145",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2136442603 | Warning: Failed prop type: Invalid prop binary of type string supplied to BarcodeChunk, expected an array.
Good day, friends. I'm using the example code from the library's documentation
import React from 'react';
import { View } from 'react-native';
import { Barcode } from 'expo-barcode-generator';
export default function App() {
return (
<View style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}>
<Barcode
value="123456789999"
options={{ format: 'UPC', background: 'lightblue' }}
rotation={-5}
/>
</View>
);
}
and encountering the following error:
ERROR Warning: Failed prop type: Invalid prop `binary` of type `string` supplied to `BarcodeChunk`, expected an array.
at BarcodeChunk (http://192.168.3.221:8081/node_modules/expo-router/entry.bundle//&platform=ios&dev=true&hot=false&lazy=true&transform.engine=hermes&transform.routerRoot=app:214555:22)
in Barcode (at ScreenHomeAcc.jsx:38)
in RCTView (at View.js:116)
in View (at ScreenHomeAcc.jsx:33)
in RCTView (at View.js:116)
in View (at ScreenHomeAcc.jsx:18)
in RCTView (at View.js:116)
in View (at ScreenHomeAcc.jsx:12)
in ScreenHomeAcc (at profile.js:6)
in Profile (at useScreens.js:112)
in Unknown (at useScreens.js:116)
in Suspense (at useScreens.js:115)
in Route (at useScreens.js:131)
in Route(profile) (at SceneView.tsx:132)
in StaticContainer
in EnsureSingleNavigator (at SceneView.tsx:124)
in SceneView (at useDescriptors.tsx:218)
in RCTView (at View.js:116)
in View (at Screen.tsx:63)
in RCTView (at View.js:116)
in View (at Background.tsx:13)
in Background (at Screen.tsx:58)
in Screen (at BottomTabView.tsx:135)
in RNSScreen (at createAnimatedComponent.js:54)
in Unknown (at src/index.native.tsx:314)
in Suspender (at src/index.tsx:40)
in Suspense (at src/index.tsx:39)
in Freeze (at src/index.native.tsx:206)
in DelayedFreeze (at src/index.native.tsx:313)
in InnerScreen (at src/index.native.tsx:566)
in Screen (at ScreenFallback.tsx:39)
in MaybeScreen (at BottomTabView.tsx:127)
in RNSScreenNavigationContainer (at src/index.native.tsx:398)
in ScreenContainer (at ScreenFallback.tsx:30)
in MaybeScreenContainer (at BottomTabView.tsx:93)
in RCTView (at View.js:116)
in View (at SafeAreaProviderCompat.tsx:42)
in SafeAreaProviderCompat (at BottomTabView.tsx:92)
in BottomTabView (at createBottomTabNavigator.tsx:118)
in PreventRemoveProvider (at useNavigationBuilder.tsx:718)
in NavigationContent (at useComponent.tsx:35)
in Unknown (at createBottomTabNavigator.tsx:117)
in BottomTabNavigator (at withLayoutContext.js:65)
in Unknown (at _layout.js:14)
in Bro (at useScreens.js:112)
in Unknown (at useScreens.js:116)
in Suspense (at useScreens.js:115)
in Route (at useScreens.js:131)
in Route(user) (at SceneView.tsx:132)
in StaticContainer
in EnsureSingleNavigator (at SceneView.tsx:124)
in SceneView (at useDescriptors.tsx:218)
in RCTView (at View.js:116)
in View (at DebugContainer.native.tsx:34)
in DebugContainer (at NativeStackView.native.tsx:82)
in MaybeNestedStack (at NativeStackView.native.tsx:325)
in RCTView (at View.js:116)
in View (at NativeStackView.native.tsx:318)
in RNSScreen (at createAnimatedComponent.js:54)
in Unknown (at src/index.native.tsx:314)
in Suspender (at src/index.tsx:40)
in Suspense (at src/index.tsx:39)
in Freeze (at src/index.native.tsx:206)
in DelayedFreeze (at src/index.native.tsx:313)
in InnerScreen (at src/index.native.tsx:566)
in Screen (at NativeStackView.native.tsx:253)
in SceneView (at NativeStackView.native.tsx:413)
in Suspender (at src/index.tsx:40)
in Suspense (at src/index.tsx:39)
in Freeze (at src/index.native.tsx:206)
in DelayedFreeze (at src/index.native.tsx:220)
in RNSScreenStack (at src/index.native.tsx:227)
in ScreenStack (at NativeStackView.native.tsx:401)
in NativeStackViewInner (at NativeStackView.native.tsx:474)
in RCTView (at View.js:116)
in View (at SafeAreaProviderCompat.tsx:42)
in SafeAreaProviderCompat (at NativeStackView.native.tsx:473)
in NativeStackView (at createNativeStackNavigator.tsx:72)
in PreventRemoveProvider (at useNavigationBuilder.tsx:718)
in NavigationContent (at useComponent.tsx:35)
in Unknown (at createNativeStackNavigator.tsx:71)
in NativeStackNavigator (at withLayoutContext.js:65)
in Unknown (at _layout.js:11)
in Layout (at useScreens.js:112)
in Unknown (at useScreens.js:116)
in Suspense (at useScreens.js:115)
in Route (at useScreens.js:131)
in Route() (at ExpoRoot.js:90)
in RNCSafeAreaProvider (at SafeAreaContext.tsx:92)
in SafeAreaProvider (at ExpoRoot.js:55)
in wrapper (at ExpoRoot.js:89)
in EnsureSingleNavigator (at BaseNavigationContainer.tsx:430)
in BaseNavigationContainer (at NavigationContainer.native.js:105)
in ThemeProvider (at NavigationContainer.native.js:104)
in NavigationContainerInner (at ExpoRoot.js:86)
in ContextNavigator (at ExpoRoot.js:64)
in ExpoRoot (at qualified-entry.js:20)
in App (created by ErrorOverlay)
in ErrorToastContainer (created by ErrorOverlay)
in ErrorOverlay (at withDevTools.ios.js:25)
in withDevTools(ErrorOverlay) (at renderApplication.js:57)
in RCTView (at View.js:116)
in View (at AppContainer.js:127)
in RCTView (at View.js:116)
in View (at AppContainer.js:155)
in AppContainer (at renderApplication.js:50)
in main(RootComponent) (at renderApplication.js:67)
I use
"dependencies": {
"expo": "~50.0.5",
"expo-barcode-generator": "^2.0.0"
}
Tell me how to fix this?
@nmamali Please tell me, should I wait for a bugfix release, or is there a workaround I can use in the meantime?
could you please create a minimal-reproducible-example (https://stackoverflow.com/help/minimal-reproducible-example)
@CoderButerbroder if you change that line to the following, does it still work?
BarcodeChunk.propTypes = {
padding: PropTypes.number,
binary: PropTypes.oneOfType([PropTypes.arrayOf(PropTypes.string), PropTypes.string]),
options: PropTypes.shape({
textPosition: PropTypes.oneOf(['top', 'bottom']),
fontSize: PropTypes.number,
textMargin: PropTypes.number,
width: PropTypes.number,
height: PropTypes.number
})
};
I'll release a new version with that fix shortly. Thanks for testing it 💪
@CoderButerbroder I've just released 2.0.1. Please let me know if it fixes your issue.
| gharchive/issue | 2024-02-15T12:49:53 | 2025-04-01T06:45:09.283471 | {
"authors": [
"CoderButerbroder",
"JPStrydom",
"nmamali"
],
"repo": "nmamali/expo-barcode-generator",
"url": "https://github.com/nmamali/expo-barcode-generator/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
293312882 | CSRF
Investigate the topic.
@heysailor thanks but the link seems to be broken
I think his link was http://www.redotheweb.com/2015/11/09/api-security.html
| gharchive/issue | 2018-01-31T21:11:24 | 2025-04-01T06:45:09.285428 | {
"authors": [
"duiker101",
"nmaro"
],
"repo": "nmaro/ooth",
"url": "https://github.com/nmaro/ooth/issues/33",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
120176299 | PullToRefreshListView: Upside down mode of pulling
Is it possible to support 'upside down pull to refresh'?
'Upside down' means that when the scroll reaches the bottom, pulling up triggers a refresh.
That should be possible. I'll investigate and report.
| gharchive/issue | 2015-12-03T13:55:19 | 2025-04-01T06:45:09.289485 | {
"authors": [
"nmetulev",
"pnp0a03"
],
"repo": "nmetulev/comet",
"url": "https://github.com/nmetulev/comet/issues/4",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2515338436 | Champions Cohorts: 2024-nmfs. Oct 8 - Dec 4
NMFS Openscapes Champions Cohorts are managed in the NMFS-Openscapes / CohortPlanning project.
This issue is open to indicate a Cohort is in progress
Cohort completed! Issues are open for wrap up and blog post
| gharchive/issue | 2024-09-10T03:54:48 | 2025-04-01T06:45:09.290805 | {
"authors": [
"stefaniebutland"
],
"repo": "nmfs-openscapes/how-we-work",
"url": "https://github.com/nmfs-openscapes/how-we-work/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
586252168 | csr.periph: add Peripheral base class.
This PR aims to add support for CSR-capable peripherals.
Related issue: #10
A new base class csr.Peripheral can be subclassed to provide nmigen-soc peripherals with helpers
for managing CSR registers and sending interrupt requests to a CPU. Support for interrupts is optional.
The plumbing (multiplexing the registers, managing events, requesting interrupts) between the peripheral and the outside world is done by the PeripheralBridge, which is generated for the user by calling self.csr_bridge(). It exposes a csr.Interface and optionally an IRQ line. The bridge is an Elaboratable, so the subclass must add it as a submodule during elaborate().
A few general questions:
Is a design based on inheritance the best we can do here? The main problem here is that mixing together nmigen-soc code and user code places constraints on evolution of each. If we ever add new fields (even private) their names can clash with user-defined attributes and will be silently overwritten. This is a major backwards compatibility hazard, and this is why I got rid of Migen's Module in favor of nMigen's Elaboratable (which only ever touches private fields). I think a design based on composition would work better in long term, but I do not right now have a specific proposal.
Should the events be tightly coupled to peripherals, or would it make more sense to introduce them as a separate mechanism that then becomes a part of the peripheral code? I can think of a few uses for IRQ-like, prioritized events in SoC code that do not directly correspond to peripherals. For example, some USB device cores offer many dozens of events that are then composed into a single IRQ line the USB device peripheral provides to the host.
I think a design based on composition would work better in long term, but I do not right now have a specific proposal.
I went with inheritance for lack of knowledge of a better way.
By composition, do you mean something like this ?
class ExamplePeripheral(Elaboratable):
def __init__(self):
self._bridge = csr.PeripheralBridge(data_width=8, alignment=0)
self._data = self._bridge.csr(8, "w")
self._rdy = self._bridge.event(mode="rise")
self.csr_bus = self._bridge.bus
self.irq = self._bridge.irq
def elaborate(self, platform):
m = Module()
m.submodules.bridge = self._bridge
# ...
return m
Besides relying on naming conventions such as csr_bus or irq, this should keep the boilerplate low while avoiding inheritance.
Should the events be tightly coupled to peripherals, or would it make more sense to introduce them as a separate mechanism that then becomes a part of the peripheral code?
I can think of a few uses for IRQ-like, prioritized events in SoC code that do not directly correspond to peripherals. For example, some USB device cores offer many dozens of events that are then composed into a single IRQ line the USB device peripheral provides to the host.
Yes, I can decouple the event management logic from the peripheral, and reuse it inside a csr.PeripheralBridge afterwards.
Hm, I'm sympathetic to both views re: inheritance. The composition example is somewhat unsatisfying because it's purely based on convention, which seems like a recipe for a lot of not-quite-compatible variants. What about using something analogous to FIFOInterface here? If not, I'd at least prefer to expose self.bridge instead of breaking out specific fields.
Yes, I can decouple the event management logic from the peripheral, and reuse it inside a csr.PeripheralBridge afterwards.
Let's do that first since it's a small self-contained addition, and I can think more about the design for peripherals in the meantime.
By composition, do you mean something like this ?
Yes, something along these lines. In this case, only csr_bus and irq become part of the "peripheral interface", which means that we do not risk breaking existing code if we change implementation details of nmigen-soc.
I have one more proposal here. The PeripheralBridge class that you suggest here is just an implementation detail of every peripheral, and that is a good thing, since it means peripherals have complete implementation freedom. However, from the logical point of view of the code that uses the peripheral, all resources of a peripheral--CSRs, memories, IRQs, configuration constants, etc--are a part of a single logical group. We currently do not have anything like this.
First, consider this grouping from the perspective of firmware. For the firmware running on any specific core, there is a unified view of the available peripherals that the board support package generator uses, consisting of:
static configuration data (generating constants),
control/status registers (generating accessor functions),
memory windows (generating named address ranges),
event numbers (generating interrupt handlers and interrupt number constants).
I think you've seen part of my plan (corresponding to items 2 and 3) for the BSP generator in the MemoryMap class. To recap, if you have a list of resources somewhere, then using a root MemoryMap of a particular core, you can use find_resource to determine where in the address space of that core the resources are located.
Second, the hardware. For the hardware the perspective is actually rather different because the memory is hierarchical, but the events are not; beyond that, the memory topology is more complex than a straightforward range tree.
I think what would make sense here is having metadata classes per logical peripheral that collect together static data, CSRs, memories, and events, and which are incrementally grouped together through the entire interconnect hierarchy. The CPU gateware would (I think) only really use the events from this information, but the BSP generator would use all of it.
(I believe this is a more fleshed out version of @awygle's proposal here, who commented before I finished writing this.)
| gharchive/pull-request | 2020-03-23T14:28:40 | 2025-04-01T06:45:09.302101 | {
"authors": [
"awygle",
"jfng",
"whitequark"
],
"repo": "nmigen/nmigen-soc",
"url": "https://github.com/nmigen/nmigen-soc/pull/11",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
379696726 | The automated release is failing 🚨
:rotating_light: The automated release from the master branch failed. :rotating_light:
I recommend you give this issue a high priority, so other packages depending on you could benefit from your bug fixes and new features.
You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I’m sure you can resolve this 💪.
Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.
Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the master branch. You can also manually restart the failed CI job that runs semantic-release.
If you are not sure how to resolve this, here are some links that can help you:
Usage documentation
Frequently Asked Questions
Support channels
If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.
Invalid npm token.
The npm token configured in the NPM_TOKEN environment variable must be a valid token allowing to publish to the registry https://registry.npmjs.org/.
If you are using Two-Factor Authentication, make sure the auth-only level is configured. semantic-release cannot publish with the default auth-and-writes level.
Please make sure to set the NPM_TOKEN environment variable in your CI with the exact value of the npm token.
Good luck with your project ✨
Your semantic-release bot :package::rocket:
| gharchive/issue | 2018-11-12T09:49:48 | 2025-04-01T06:45:09.322980 | {
"authors": [
"nmrony"
],
"repo": "nmrony/gtni",
"url": "https://github.com/nmrony/gtni/issues/79",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
838864523 | tests, ovs: check ovs duplicate names with ipv4 dns is supported
Ref: https://bugzilla.redhat.com/1939557
Signed-off-by: Fernando Fernandez Mancera ffmancera@riseup.net
Not stale, if CI passes this is going to be merged. The same patch has been merged on 1.0
| gharchive/pull-request | 2021-03-23T15:46:23 | 2025-04-01T06:45:09.325060 | {
"authors": [
"ffmancera"
],
"repo": "nmstate/nmstate",
"url": "https://github.com/nmstate/nmstate/pull/1545",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2696878285 | [CI] Drop i586 GBS build test
According to Tizen's official supported-arch list, i586 no longer needs to be tested, so drop it.
The list below shows the officially supported archs:
armv7l
armv7hl
aarch64
x86_64
riscv64
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
:memo: TAOS-CI Version: 1.5.20200925. Thank you for submitting PR #2807. Please follow the 1 commit/1 PR (one commit per PR) policy to get comments quickly from reviewers. Your PR must pass all verification processes of cibot before a review by reviewers can start. If you are a new member joining this project, please read the manuals in the documentation folder and wiki page. In order to monitor the progress status of your PR in more detail, visit http://ci.nnstreamer.ai/.
| gharchive/pull-request | 2024-11-27T03:33:10 | 2025-04-01T06:45:09.353388 | {
"authors": [
"DonghakPark",
"taos-ci"
],
"repo": "nnstreamer/nntrainer",
"url": "https://github.com/nnstreamer/nntrainer/pull/2807",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
772705051 | [#806] [nnapi] FC forward
Add an NNAPI FC layer for Android acceleration.
This patch adds the NNAPI FC layer with single-model methods.
Self evaluation:
Build test: [X]Passed [ ]Failed [ ]Skipped
Run test: [X]Passed [ ]Failed [ ]Skipped
Signed-off-by: Jihoon Lee jhoon.it.lee@samsung.com
Leaving this for the record.
:memo: TAOS-CI Version: 1.4.20191203. Thank you for submitting PR #821. Please follow the 1 commit/1 PR (one commit per PR) policy to get comments quickly from reviewers. Your PR must pass all verification processes of cibot before a review by reviewers can start. If you are a new member joining this project, please read the manuals in the documentation folder and wiki page. In order to monitor the progress status of your PR in more detail, visit http://nnsuite.mooo.com/.
:octocat: cibot: @zhoonit, A builder checker could not be completed because one of the checkers is not completed. In order to find out a reason, please go to http://nnsuite.mooo.com/nntrainer/ci/repo-workers/pr-checker/821-202012221614130.95658802986145-e6be2562c03179decaed3e56abd12c73132a1725/.
Seems we need to fix the CI issues.
This PR won't be proceeding as of now.
| gharchive/pull-request | 2020-12-22T07:12:53 | 2025-04-01T06:45:09.358627 | {
"authors": [
"lhs8928",
"taos-ci",
"zhoonit"
],
"repo": "nnstreamer/nntrainer",
"url": "https://github.com/nnstreamer/nntrainer/pull/821",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
749796939 | How is .txt file generated between generating data and training model?
Hi there,
I am implementing this using the METR-LA data.
The first step to generate the training data seems to output a set of 3 .npz files, but nothing else. Then the input for the training routine is "traffic.txt". I can't see where this .txt file is created, nor can I figure out what it may contain.
Can you help please?
I think I found it.
For anyone having a similar issue, the input ".txt" datasets for the training routine can be found here:
https://github.com/laiguokun/multivariate-time-series-data
| gharchive/issue | 2020-11-24T15:12:36 | 2025-04-01T06:45:09.360629 | {
"authors": [
"cmconlan"
],
"repo": "nnzhan/MTGNN",
"url": "https://github.com/nnzhan/MTGNN/issues/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
453081196 | Expose getLocationPermission function
Currently there is no way to easily request a new location; the whole passing-through of refs in the parent is quite inelegant. And this only seems to work once...
I have the following use case: a map that shows the user's location, and every time they press 'relocate to my location' it should pop up the location permission box if it was denied earlier.
Hi, the ref-based solution should work (and it should work repeatedly, if it's not, please provide an example). I don't know how else I should expose the function, do you have any ideas?
My issue was that I only had a single component file, just the app.js. It is a very small app. And since stateless components don't have access to their own 'this'....
For now I fixed it by wrapping it in another container component just to get the ref.
If you want, I can take a look this weekend and see if I can make a PR to expose the function better? I have a few ideas that might work.
Oh, I see. I'm glad you made it work :) PR would be welcome if you have the time :)
Closing as inactive
| gharchive/issue | 2019-06-06T15:00:25 | 2025-04-01T06:45:09.363287 | {
"authors": [
"Pixelatex",
"no23reason"
],
"repo": "no23reason/react-geolocated",
"url": "https://github.com/no23reason/react-geolocated/issues/238",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2612876253 | Process helps other process once finished
Look into having a process, once it finishes, help another process with its word list to speed up finding solutions. Maybe divide the remaining words the busy process has in half and give half to the helper process.
I think an easier way to achieve the same result would be for each process to just take the next starting word from the list instead of breaking the word list into sections. When a process finishes checking its word, it looks up the next word that needs to be checked.
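That shared work-queue idea could look something like the following sketch (the per-word solver step is a stand-in, and all names are illustrative):

```python
import multiprocessing as mp

def check_word(word):
    """Stand-in for the real per-word solution search."""
    return (word, len(word))

def worker(queue, results):
    # Each worker pulls the next unchecked starting word from a shared
    # queue instead of owning a fixed slice of the word list, so a fast
    # worker naturally picks up the slack; None is the stop sentinel.
    while True:
        word = queue.get()
        if word is None:
            break
        results.append(check_word(word))

if __name__ == "__main__":
    words = ["crane", "slate", "pious", "abbey", "fjord"]
    n_workers = 2
    queue = mp.Queue()
    results = mp.Manager().list()
    for w in words:
        queue.put(w)
    for _ in range(n_workers):
        queue.put(None)  # one sentinel per worker
    procs = [mp.Process(target=worker, args=(queue, results))
             for _ in range(n_workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    assert len(results) == len(words)  # every word got checked exactly once
```

`multiprocessing.Pool(...).imap_unordered` with `chunksize=1` gives the same one-word-at-a-time behavior with less code.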
| gharchive/issue | 2024-10-25T01:48:56 | 2025-04-01T06:45:09.364496 | {
"authors": [
"noahdgrant"
],
"repo": "noahdgrant/squareword-solver",
"url": "https://github.com/noahdgrant/squareword-solver/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2242230299 | ExllamaV2 optimizations
Currently, building the initial token tree is inefficient and can cause slow ingestion of tokens (for example, a JSON schema). This is evident when using models with large vocab sizes such as cohere command-r, gemma, and qwen. Generation locks up and takes hours to process. These commits help optimize that initial building when creating an ExllamaV2 LMFE filter.
Tests: ran command-r with a JSON schema in TabbyAPI using LMFE v0.9.5; it would not start generating. With these commits, generation starts immediately.
References #75
Thanks @turboderp for creating these commits.
Merged, thanks @bdashore3 and @turboderp for the contribution!
| gharchive/pull-request | 2024-04-14T15:49:38 | 2025-04-01T06:45:09.369736 | {
"authors": [
"bdashore3",
"noamgat"
],
"repo": "noamgat/lm-format-enforcer",
"url": "https://github.com/noamgat/lm-format-enforcer/pull/88",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1300339054 | Failed to load plugin 'AerialMapDisplay'
I'm trying to integrate satellite images into Linorobot and display them with rviz, but I keep getting an error.
roslaunch rviz_satellite demo.launch shows this error message:
[gps_fix-1] process has died [pid 17298, exit code 64, cmd /opt/ros/melodic/lib/rostopic/rostopic /gps/fix __name:=gps_fix __log:=/home/eilidar/.ros/log/c4f23c6a-00e4-11ed-b27c-60a4b7c94bc5/gps_fix-1.log].
log file: /home/eilidar/.ros/log/c4f23c6a-00e4-11ed-b27c-60a4b7c94bc5/gps_fix-1*.log
[ERROR] [1657525401.857090717]: PluginlibFactory: The plugin for class 'rviz_plugins/AerialMapDisplay' failed to load. Error: According to the loaded plugin descriptions the class rviz_plugins/AerialMapDisplay with base class type rviz::Display does not exist. Declared types are rviz/Axes rviz/Camera rviz/DepthCloud rviz/Effort rviz/FluidPressure rviz/Grid rviz/GridCells rviz/Illuminance rviz/Image rviz/InteractiveMarkers rviz/LaserScan rviz/Map rviz/Marker rviz/MarkerArray rviz/Odometry rviz/Path rviz/PointCloud rviz/PointCloud2 rviz/PointStamped rviz/Polygon rviz/Pose rviz/PoseArray rviz/PoseWithCovariance rviz/Range rviz/RelativeHumidity rviz/RobotModel rviz/TF rviz/Temperature rviz/WrenchStamped rviz_plugin/AerialMapDisplay rviz_plugin_tutorials/Imu
How did you install the package?
I used git clone to copy this project into my workspace's src folder, then ran catkin_make in the workspace directory.
I only knew how to build with catkin_make; let me try using catkin build.
Sorry, I'm still new to ROS.
Running catkin build (from catkin-tools) in the workspace fixed this issue. Thanks for the advice!
Great, happy to help!
| gharchive/issue | 2022-07-11T07:50:18 | 2025-04-01T06:45:09.378003 | {
"authors": [
"EnderDragonEP",
"Timple"
],
"repo": "nobleo/rviz_satellite",
"url": "https://github.com/nobleo/rviz_satellite/issues/101",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
247258833 | Change Name to BitcoinLitecoinAndDogecoinButNotNecessarilyInThatOrderClams
Given that BitcoinCash just forked, it seems clear to me that a chain that uses a chain state as the means of determining initial distribution of a coin is, indeed, a legitimate Bitcoin fork. While I understand that there might end up being great confusion with regard to which Bitcoin is to be used here or there, it also seems obvious to me that Clams is the longest running legitimate Bitcoin fork using the "state of the chain" (which would be a great name for a podcast) as a means for distribution.
As such, I think that Clams should consider changing its name to BitcoinClams, which will provide the longest running POS implementation of Bitcoin the legitimacy of the name, as well as emphasizing ClamCoins as an alternative to the disgusting censorship of Theymos and obvious collusion that Adam Back, his dark corporation Blockstreamm and the rest of the core team have had with Satan.
Alternatively, it might be worth considering that Clams, itself, be forked in much the same way that Clams was originally forked. We could copy the state of Clams' blockchain and distribute more equitably the chain's currency under the moniker "BitcoinClams" as proposed here, for the sake of better preserving the vision of our LORD and Savior Satoshi Nakamurmo.
Thank you for considering this proposal. I pray that the patron Saint of Blockchains, Saint Catherine, would heap upon your third eye the blessings you deserve for the open source work you have done unto Blockchain Jesus who is and was and is to come forever and ever. And in so considering this proposal, I would hope you give special consideration to all those in the Godhead of the Blockchain that you might not commit some Sebelianistically disparaging comment that would cause Him to unbless your work.
May you fork your mother if you want fork,
Junseth
All hail the Great CLAM; the alpha and omega of forkage.
Closed for future merging and worship.
Joshua, while I see the logic of your argument you must bear in mind that CLAM is just as much a fork of Litecoin and Dogecoin as it is of Bitcoin.
It doesn't seem fair to use "Bitcoin" in the name and not the other two. Or, for that matter, to put Bitcoin first.
How about we compromise and change the name to BitcoinLitecoinAndDogecoinButNotNecessarilyInThatOrderClams instead? A bit like how SegWit2X found a compromise between the people who don't want a hard fork and the people who don't want SegWit by giving everyone both things.
I think this sounds reasonable. Can we reopen this ticket under the title "Change Name to BitcoinLitecoinAndDogecoinButNotNecessarilyInThatOrderClams"?
I changed the title, but I'm too scared of creativecuriosity to reopen it.
Please leave this issue as closed.
Warmest Regards,
Justin Abraham
On Wed, Aug 2, 2017 at 2:00 AM, Chris Moore notifications@github.com
wrote:
I changed the title, but I'm too scared of creativecuriosity to reopen it.
—
You are receiving this because you are subscribed to this thread.
Reply to this email directly, view it on GitHub
https://github.com/nochowderforyou/clams/issues/312#issuecomment-319585902,
or mute the thread
https://github.com/notifications/unsubscribe-auth/ATLS5desXyB-nGEG9_1knKCBmVd3jiCCks5sUB6SgaJpZM4Oqiaq
.
I can see it happen.. CashMeClamside, coming to fruition.
Hail The Clam Lords.
| gharchive/issue | 2017-08-02T02:55:12 | 2025-04-01T06:45:09.385753 | {
"authors": [
"YiumPotato",
"accttotech",
"creativecuriosity",
"dooglus",
"junseth"
],
"repo": "nochowderforyou/clams",
"url": "https://github.com/nochowderforyou/clams/issues/312",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2622285538 | Organize github actions workflows and expand build/test matrices
The files in https://github.com/nod-ai/SHARK-Platform/tree/main/.github/workflows are proliferating with overlapping workflows while missing some coverage we do care about.
GitHub has tools to manage this scaling:
https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/running-variations-of-jobs-in-a-workflow
https://docs.github.com/en/actions/writing-workflows/workflow-syntax-for-github-actions
We generally want to build/test/release across:
Python versions: 3.10, 3.11, 3.12, 3.13, 3.13t
Operating systems: Linux, Windows (macOS too?)
Accelerators/devices: CPU, GPU (MI300, others)
Other configurations: address sanitizer (ASan), instrumented (Tracy), bundled or bring-your-own deps like IREE, etc.
(Stretch) architectures: x86, aarch64
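The dimensions above map naturally onto a GitHub Actions `strategy.matrix`. As a sketch (job names, excludes, and version pins here are illustrative, not the project's actual config):

```yaml
# Hypothetical matrix sketch -- names and excludes are illustrative.
jobs:
  test:
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.10", "3.11", "3.12", "3.13"]
        os: [ubuntu-latest, windows-latest]
        exclude:
          - os: windows-latest
            python-version: "3.10"  # trim the matrix where coverage overlaps
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
```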
Each subproject (sharktank, shortfin, tuner) is currently self-contained, so we can keep at least one workflow per subproject. Common tasks like running pre-commit and mypy type checking could stay in https://github.com/nod-ai/SHARK-Platform/blob/main/.github/workflows/pre-commit.yaml (note that iree-turbine puts MyPy Type Checking as the last step in multiple build/test workflows: https://github.com/iree-org/iree-turbine/blob/2b45c0fdec21f69b9cc088ec9852e98f5219c37c/.github/workflows/ci.yaml#L65-L68)
As part of this we could have these workflows test both nightly IREE and iree-turbine packages and pinned versions. I think blocking CI should use pinned versions, while the current mode of testing with the latest nightly/source packages can drop to being allowed to fail.
| gharchive/issue | 2024-10-29T20:41:57 | 2025-04-01T06:45:09.391439 | {
"authors": [
"ScottTodd"
],
"repo": "nod-ai/SHARK-Platform",
"url": "https://github.com/nod-ai/SHARK-Platform/issues/357",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2206093005 | [Tracking] E2E Tests (torch->onnx->torch->linalg)
This issue tracks the E2E op tests for the OnnxToLinalg lowering.
Failing Tests (count: 547), as of 08/04/24
Higher Priority:
Failure - incorrect numerics
[ ] "ElementwiseAtan2TensorIntModule_basic",
[ ] "ElementwiseLog10IntModule_basic",
[ ] "ElementwiseLog2IntModule_basic",
[ ] "ElementwiseSeluModule_basic",
[ ] "FlipModuleStaticShape_basic",
[ ] "FlipNegativeIndexModule_basic",
[ ] "HardsigmoidModule_basic",
[ ] "HardsigmoidRandomModule_basic",
[ ] "PixelShuffleModuleStaticRank4Float32_basic",
[ ] "SliceCopyEndGreaterThanDimSize_Module_basic",
[ ] "SliceCopyNegative_Module_basic",
[ ] "SliceCopyNonZeroDim_Module_basic",
[ ] "SliceCopy_Module_basic",
[ ] "TupleModule_basic",
Failure - incorrect shape
[ ] "ArangeStartOutDtypeModule_basic",
[ ] "ArangeStartOutViewModule_basic",
[ ] "BroadcastDynamicDimModule_basic",
[ ] "MoveDimIntNegativeIndexModule_basic",
[ ] "ViewSizeFromOtherTensor_basic",
Failure - onnx_lowering: onnx.RandomNormal
[ ] "RandnDtypeDeviceModule_basic",
[ ] "RandnGeneratorF64Module_basic",
[ ] "RandnGeneratorModule_basic",
[ ] "RandnModule_basic",
Failure - onnx_lowering: onnx.RandomNormalLike
[x] "RandnLikeDtypeModule_basic",
[ ] "RandnLikeModule_basic",
Failure - onnx_lowering: onnx.RandomUniform
[x] "RandIntLowDtypeModule_basic",
[x] "RandIntLowModule_basic",
Failure - onnx_lowering: onnx.RandomUniformLike
[ ] "BernoulliFloatModule_basic",
[ ] "BernoulliPModule_basic",
[ ] "BernoulliTensorModule_basic",
[x] "RandLikeDtypeModule_basic",
[x] "RandLikeModule_basic",
[x] "RandModule_basic",
Failure - onnx_lowering: onnx.ReduceProd
[ ] #609
[ ] #610
[ ] "DropoutTrainStaticShapeModule_basic",
[ ] "NativeDropoutTrainModule_basic",
[ ] "NativeDropoutTrainStaticShapeModule_basic",
[ ] #592
[ ] "StdCorrectionLargeInputModule_basic",
[ ] "VarCorrectionLargeInputModule_basic",
Failure - onnx_lowering: onnx.Pad
[ ] "ReflectionPad1dModule2dInput_Right",
[ ] "ReflectionPad1dModule2dInput_basic",
[ ] "ReflectionPad1dModule3dInput_Left",
[ ] "ReflectionPad1dModule3dInput_basic",
[ ] "ReflectionPad2dModule_Bottom",
[ ] "ReflectionPad2dModule_Left",
[ ] "ReflectionPad2dModule_Right",
[ ] "ReflectionPad2dModule_Top",
[ ] "ReflectionPad2dModule_basic",
[ ] "ReplicationPad2dModule_basic",
[ ] "ReplicationPad2dModule_bottom0",
[ ] "ReplicationPad2dModule_left0",
[ ] "ReplicationPad2dModule_right0",
[ ] "ReplicationPad2dModule_top0",
Failure - onnx_lowering: onnx.ScatterND
[ ] "IndexPut1DFloatAccumulateModule_basic",
[ ] "IndexPut1DFloatNonAccumulateModule_basic",
[ ] "IndexPut1DIntAccumulateModule_basic",
[ ] "IndexPut1DIntNonAccumulateModule_basic",
[ ] "IndexPut2DFloatAccumulateModule_basic",
[ ] "IndexPut2DFloatNonAccumulateModule_basic",
[ ] "IndexPut2DIntAccumulateModule_basic",
[ ] "IndexPut2DIntNonAccumulateModule_basic",
[ ] "IndexPut3DFloatAccumulateModule_basic",
[ ] "IndexPut3DFloatNonAccumulateModule_basic",
[ ] "IndexPut3DIntAccumulateModule_basic",
[ ] "IndexPut3DIntNonAccumulateModule_basic",
[ ] "IndexPutHackedTwin1DFloatAccumulateModule_basic",
[ ] "IndexPutHackedTwin1DFloatNonAccumulateModule_basic",
[ ] "IndexPutHackedTwin1DIntAccumulateModule_basic",
[ ] "IndexPutHackedTwin1DIntNonAccumulateModule_basic",
[ ] "IndexPutHackedTwin2DFloatAccumulateModule_basic",
[ ] "IndexPutHackedTwin2DFloatNonAccumulateModule_basic",
[ ] "IndexPutHackedTwin2DIntAccumulateModule_basic",
[ ] "IndexPutHackedTwin2DIntNonAccumulateModule_basic",
[ ] "IndexPutHackedTwin3DFloatAccumulateModule_basic",
[ ] "IndexPutHackedTwin3DFloatNonAccumulateModule_basic",
[ ] "IndexPutHackedTwin3DIntAccumulateModule_basic",
[ ] "IndexPutHackedTwin3DIntNonAccumulateModule_basic",
Failure - onnx_lowering: onnx.ScatterElements
[ ] "ScatterSrcModule_basic",
[ ] "ScatterSrcStaticModule_basic",
[ ] "ScatterValueFloatModule_basic",
[ ] "ScatterValueIntModule_basic",
Failure - onnx_lowering: onnx.Squeeze
[ ] "SqueezeModule_allUnitDim",
[ ] "SqueezeModule_broadcast",
[ ] "SqueezeModule_static",
Failure - onnx_lowering: onnx.MaxPool
[ ] "MaxPool2dWithIndicesAllNegativeValuesModule_basic",
[ ] "MaxPool2dWithIndicesNonDefaultPaddingModule_basic",
[ ] "MaxPool2dWithIndicesStaticModule_basic",
Failure - onnx_lowering: onnx.ReduceL1
[ ] "ReduceL1NormModule_basic",
[ ] "ReduceL1NormWithDTypeModule_basic",
Failure - onnx_lowering: onnx.Resize
[ ] "UpSampleNearest2dDynamicSize_basic",
[ ] "UpSampleNearest2dStaticSize_basic",
Failure - onnx_lowering: onnx.AveragePool
[ ] "AdaptiveAvgPool1dGeneralDynamicNoBatches_basic",
[ ] "AvgPool2dDivisorOverrideModule_basic",
Failure - onnx_lowering: onnx.SoftmaxCrossEntropyLoss
[ ] "CrossEntropyLossModule_basic",
[ ] "CrossEntropyLossNoReductionModule_basic",
Failure - onnx_lowering: onnx.Cast
[ ] #554
[ ] #555
Failure - onnx_lowering: onnx.ReduceL2
[ ] "ReduceL2NormModule_basic",
Failure - onnx_lowering: onnx.ReduceSum
[ ] "ReduceL3NormKeepDimModule_basic",
Failure - cast error
[ ] "PermuteNegativeIndexModule_basic",
Failure - incorrect dtype
[ ] "ReduceMaxAlongDimUnsignedInt_basic",
Failure - onnx_lowering: onnx.Clip
[ ] "NormalizeModule_basic",
Failure - onnx_lowering: onnx.OneHot
[ ] "OneHotModule_basic",
Failure - torch.aten.view lower
[ ] "IndexTensorDyanmicInputContiguousWithNoneModule_basic",
[ ] "IndexTensorDyanmicInputNonContiguousWithNoneModule_basic",
[ ] "IndexTensorHackedTwinMultiInputNonContiguousMultipleStaticDims_basic",
[ ] "IndexTensorMultiInputContiguousCenter_basic",
[ ] "IndexTensorMultiInputNonContiguousMultipleStaticDims_basic",
[ ] "IndexTensorMultiInputNonContiguous_basic",
[ ] "IndexTensorMultiInputOneDim_basic",
[ ] "IndexTensorMultiInputThreeIndexers_basic",
[ ] "IndexTensorMultiInput_basic",
[ ] "ViewFlattenAndExpandModule_basic",
[ ] "ViewSizeDimFollowedByCollapsedOnesModule_basic",
[ ] "ViewSizeDimFollowedByExpandedOnesModule_basic",
[ ] "ViewSizeDimLedAndFollowedByCollapsedOnesModule_basic",
[ ] "ViewSizeDimLedAndFollowedByExpandedOnesModule_basic",
[ ] "ViewSizeDimLedByCollapsedOnesModule_basic",
[ ] "ViewSizeDimLedByExpandedOnesModule_basic",
Failure - unknown
[ ] "Conv2dWithPaddingDilationStrideStaticModule_depthwise_multiplier",
[ ] "CopyWithDifferentDTypesAndSizesModule_basic",
[ ] "CopyWithDifferentDTypesModule_basic",
[ ] "CosineSimilarityStaticBroadcastModule_basic",
[ ] "CumsumInputDtypeInt32Module_basic",
[ ] "ElementwiseAcosIntModule_basic",
[ ] "ElementwiseAsinIntModule_basic",
[ ] "ElementwiseAtanTensorIntModule_basic",
[ ] "ElementwiseCosIntModule_basic",
[ ] "ElementwiseDivRoundingModeTruncModule_basic",
[ ] "ElementwiseErfIntModule_basic",
[ ] #585
[ ] "ElementwiseLogIntModule_basic",
[x] #588
[ ] "ElementwiseSigmoidIntModule_basic",
[ ] "ElementwiseSinIntModule_basic",
[ ] "ElementwiseTanIntModule_basic",
[ ] "ElementwiseUnaryIntModule_basic",
[ ] "EmbeddingModuleF16_basic",
[ ] "EmbeddingModuleI32_basic",
[ ] "EmbeddingModuleI64_basic",
[ ] "FlattenDynamicModule_basic",
[ ] "GluStaticModule_basic",
[ ] "IndexTensorHackedTwinModule3dInput_basic",
[ ] "IndexTensorHackedTwinModule_basic",
[ ] "IndexTensorModule3dInput_basic",
[ ] "IndexTensorModule_basic",
[ ] "IndexTensorMultiInputContiguousOneDimDynamic_basic",
[ ] "IndexTensorMultiInputNonContiguousDynamic_basic",
[ ] "IndexTensorMultiInputNonContiguousOneDimDynamic_basic",
[ ] "IndexTensorSelectDimModule_basic",
[ ] "MaskedFillTensorFloatValueModule_basic",
[ ] "ReduceAllDimEmpty_basic",
[ ] "ReduceAllDimFloat_basic",
[ ] "ReduceAllDimInt_basic",
[ ] "ReduceMinAlongDimUnsignedInt_basic",
Failure - onnx_import
[x] #553
[ ] "DiagonalModule_nonsquare",
[ ] "DiagonalModule_transposed",
[ ] "DiagonalModule_with_dims",
[ ] "DiagonalModule_with_dims_and_offset",
[ ] "DiagonalModule_with_negative_dims",
[ ] "DiagonalModule_with_offset",
[ ] "AtenDiagEmbedDefaultDiag_basic",
[ ] "AtenDiagEmbedDimDiag_basic",
[ ] "AtenDiagEmbedOffsetDiag_basic",
[ ] "AtenDiagEmbedRevDimDiag_basic",
[ ] "AtenDiagEmbedNegOffsetDiag_basic",
[ ] "AtenDiagEmbedNonDefault4DDiag_basic",
[ ] "ScatterReduceFloatMaxModuleIncludeSelf",
[ ] "ScatterReduceFloatMinModuleIncludeSelf",
[ ] "ScatterReduceFloatProdModuleIncludeSelf",
[ ] "ScatterReduceFloatSumModuleIncludeSelf",
[ ] "ScatterReduceIntMaxModuleIncludeSelf",
[ ] "ScatterReduceIntMinModuleIncludeSelf",
[ ] "ScatterReduceIntProdModuleIncludeSelf",
[ ] "ScatterReduceIntSumModuleIncludeSelf",
[ ] "TileBigDimsSizeModule_basic",
[ ] "TileSmallDimsSizeModule_basic",
[ ] "LinalgNormKeepDimModule_basic",
[ ] "LinalgNormModule_basic",
Failure - "RuntimeError: linalg.cross: inputs dimension 1 must have length 3. Got 1 and 1"
[ ] "AtenLinalgCrossDynamic_basic"
Crashing tests
[ ] "FakeQuantizePerTensorAffineModule_basic",
[ ] "FakeQuantizePerTensorAffineDynamicShapeModule_basic",
Lower Priority:
Failure - onnx_export
[ ] "AdaptiveAvgPool1dGeneralDynamic_basic",
[ ] "AdaptiveAvgPool1dNonUnitOutputSizeDynamicModule_basic",
[ ] "AdaptiveAvgPool1dStaticLargerOutput_basic",
[ ] "AdaptiveAvgPool2dNonUnitOutputSizeDynamicModule_basic",
[ ] "AdaptiveMaxPool2dDynamicWithIndices_basic",
[ ] "AdaptiveMaxPool2dDynamic_basic",
[ ] "AdaptiveMaxPool2dStaticWithIndices_basic",
[ ] "AdaptiveMaxPool2dStatic_basic",
[ ] "AdaptiveMaxPool3dStatic_basic",
[ ] "AdaptiveMaxPool3dStaticWithIndices_basic",
[ ] "AdaptiveMaxPool3dDynamic_basic",
[ ] "AdaptiveMaxPool3dDynamicWithIndices_basic",
[ ] "AdaptiveMaxPool3dDynamicNoBatch_basic",
[ ] "AdaptiveMaxPool2dDynamicNoBatch_basic",
[ ] "AdaptiveMaxPool1dStatic_basic",
[ ] "AdaptiveMaxPool1dDynamic_basic",
[ ] "AdaptiveMaxPool1dDynamicNoBatch_basic",
[ ] "AdaptiveAvgPool3dDynamic_basic",
[ ] "AdaptiveAvgPool3dDynamicNoBatch_basic",
[ ] "AdaptiveAvgPool2dDynamic_basic",
[ ] "AdaptiveAvgPool2dDynamicNoBatch_basic",
[ ] "AddCDivModule_basic",
[ ] "AddIntModule_basic",
[ ] "Add_Module_basic",
[ ] "AllBoolFalseModule_basic",
[ ] "AllBoolTrueModule_basic",
[ ] "AnyBoolFalseModule_basic",
[ ] "AnyBoolTrueModule_basic",
[ ] "AtenComplex64Module_basic",
[ ] "AtenComplexImagModule_basic",
[ ] "AtenComplexRealModule_basic",
[ ] "AtenComplexViewModule_basic",
[ ] "AtenEmbeddingBagStaticModule_basic",
[ ] "AtenEmbeddingBagSumExample_basic",
[ ] "AtenFloatScalarModule_basic",
[ ] "AtenIntBoolOpConstFalseModule_basic",
[ ] "AtenIntBoolOpConstTrueModule_basic",
[ ] "AtenIntBoolOpModule_basic",
[ ] "AtenIntTensorByteDtypeModule_basic",
[ ] "AtenIntTensorCharDtypeModule_basic",
[ ] "AtenItemFpOpModule_basic",
[ ] "AtenItemIntOpModule_basic",
[ ] "AtenMmQuint8_basic",
[ ] "AtenRealView128Module_basic",
[ ] "AtenRealView64Module_basic",
[ ] "AtenSubFloatModule_basic",
[ ] "AtenTopKModule_basic",
[ ] "AtenTopKSmallestModule_basic",
[ ] "Aten_EmbeddingBagExample_basic",
[ ] "AvgPool2dWithoutPadModule_basic",
[ ] "BatchMlpLayerModule_basic",
[ ] "BincountMinlengthModule_basic",
[ ] "BincountModule_basic",
[ ] "BincountStaticSizeModule_basic",
[ ] "BoolFloatConstantModule_basic",
[ ] "BoolFloatFalseModule_basic",
[ ] "BoolFloatTrueModule_basic",
[ ] "BoolIntConstantModule_basic",
[ ] "BoolIntFalseModule_basic",
[ ] "BoolIntTrueModule_basic",
[ ] "CeilFloatModule_basic",
[ ] "ChunkListUnpackDynamic_Module_basic",
[ ] "ChunkListUnpackUnevenDynamic_Module_basic",
[ ] "CollapseAllDimensionsModule_basic",
[ ] "CollapseFullDynamicModule_basic",
[ ] "CollapsePartialDynamicModule_basic",
[ ] "CollapseRank1DynamicModule_basic",
[ ] "CollapseStaticModule_basic",
[ ] "ConstantBoolParameterModule_basic",
[ ] "ContainsIntList_False",
[ ] "ContainsIntList_True",
[ ] "Conv1dModule_basic",
[ ] "Conv2dBiasNoPaddingModule_basic",
[ ] "Conv2dModule_basic",
[ ] "Conv2dNoPaddingModule_basic",
[ ] "Conv2dQInt8Module_basic",
[ ] "Conv2dWithPaddingDilationStrideModule_basic",
[ ] "Conv2dWithPaddingModule_basic",
[ ] "Conv3dModule_basic",
[ ] "ConvTbcModule_basic",
[ ] "Conv_Transpose2dModule_basic",
[ ] "Convolution2DModule_basic",
[ ] "Convolution2DStridedModule_basic",
[ ] "ConvolutionBackwardModule2DPadded_basic",
[ ] "ConvolutionBackwardModule2DStatic_basic",
[ ] "ConvolutionBackwardModule2DStrided_basic",
[ ] "ConvolutionBackwardModule2D_basic",
[ ] "ConvolutionModule2DGroups_basic",
[ ] "ConvolutionModule2DTransposeNonUnitOutputPadding_basic",
[ ] "ConvolutionModule2DTransposeStrided_basic",
[ ] "ConvolutionModule2DTranspose_basic",
[ ] "DivFloatModule_basic",
[ ] "DivIntModule_basic",
[ ] "ElementwiseAcoshIntModule_basic",
[ ] "ElementwiseAcoshModule_basic",
[ ] "ElementwiseAsinhIntModule_basic",
[ ] "ElementwiseAsinhModule_basic",
[ ] "ElementwiseAtanhIntModule_basic",
[ ] "ElementwiseAtanhModule_basic",
[ ] "ElementwiseAtenIsneginfOpModule_basic",
[ ] "ElementwiseAtenIsposinfOpModule_basic",
[ ] "ElementwiseBitwiseAndModule_basic",
[ ] "ElementwiseBitwiseAndScalarInt32Module_basic",
[ ] "ElementwiseBitwiseAndScalarInt64Module_basic",
[ ] "ElementwiseBitwiseAndScalarInt8Module_basic",
[ ] "ElementwiseBitwiseAndStaticShapeModule_basic",
[ ] "ElementwiseBitwiseLeftShiftInt32Module_basic",
[ ] "ElementwiseBitwiseLeftShiftInt64Module_basic",
[ ] "ElementwiseBitwiseLeftShiftInt8Module_basic",
[ ] "ElementwiseBitwiseNotInt32Module_basic",
[ ] "ElementwiseBitwiseNotInt64Module_basic",
[ ] "ElementwiseBitwiseOrModule_basic",
[ ] "ElementwiseBitwiseOrStaticShapeModule_basic",
[ ] "ElementwiseBitwiseRightShiftInt32Module_basic",
[ ] "ElementwiseBitwiseRightShiftInt64Module_basic",
[ ] "ElementwiseBitwiseRightShiftInt8Module_basic",
[ ] "ElementwiseBitwiseXorModule_basic",
[ ] "ElementwiseBitwiseXorStaticShapeModule_basic",
[ ] "ElementwiseCoshIntModule_basic",
[ ] "ElementwiseCoshModule_basic",
[ ] "ElementwiseDequantizePerChannelModule_basic",
[ ] "ElementwiseDequantizePerTensorModule_basic",
[ ] "ElementwiseEluNonDefaultModule_basic",
[ ] "ElementwiseExpm1IntModule_basic",
[ ] "ElementwiseExpm1Module_basic",
[ ] "ElementwiseMulTensorComplexModule_basic",
[ ] "ElementwiseOrTensorModule_basic",
[ ] "ElementwiseOrTensorStaticShapeModule_basic",
[ ] "ElementwiseQuantizePerTensorModule_basic",
[ ] "ElementwiseQuantizePerTensorUIntModule_basic",
[ ] "ElementwiseRemainderTensorModule_Int_basic",
[ ] "ElementwiseFmodTensor_Int_basic",
[ ] "EmptyStridedModule_basic",
[ ] "EmptyStridedSizeIntStrideModule_basic",
[ ] "EqIntModule_basic",
[ ] "ExponentialModule_basic",
[ ] "FloatImplicitModule_basic",
[ ] "GeFloatIntModule_basic",
[ ] "GeFloatModule_basic",
[ ] "GeIntModule_basic",
[ ] "GeluBackwardModule_basic",
[ ] "GtFloatIntModule_basic",
[ ] "GtIntModule_basic",
[ ] "HardtanhBackward_basic",
[ ] "IndexPutImpl1DFloatAccumulateModule_basic",
[ ] "IndexPutImpl1DFloatNonAccumulateModule_basic",
[ ] "IndexPutImpl1DIntAccumulateModule_basic",
[ ] "IndexPutImpl1DIntNonAccumulateModule_basic",
[ ] "IndexPutImpl2DFloatAccumulateModule_basic",
[ ] "IndexPutImpl2DFloatNonAccumulateModule_basic",
[ ] "IndexPutImpl2DIndexModule_basic",
[ ] "IndexPutImpl2DNoneIndexStaticModule_basic",
[ ] "IndexPutImpl3DFloatAccumulateModule_basic",
[ ] "IndexPutImpl3DFloatNonAccumulateModule_basic",
[ ] "IndexPutImplIndexWithNoneModule_basic",
[ ] "IntFloatModule_basic",
[ ] "IntImplicitModule_basic",
[ ] "IouOfModule_basic",
[ ] "IsFloatingPointFloat_True",
[ ] "IsFloatingPointInt_False",
[ ] "IscloseStaticModuleTrue_basic",
[ ] "IscloseStaticModule_basic",
[ ] "LeakyReluBackwardModule_basic",
[ ] "LeakyReluBackwardStaticModule_basic",
[ ] "LenStrModule_basic",
[ ] "LiftFreshCopyModule_basic",
[ ] "LogSoftmaxBackwardModule_basic",
[ ] "MaxPool2dCeilModeTrueModule_basic",
[ ] "MaxPool2dModule_basic",
[ ] "MaxPool2dWithIndicesAllOnesModule_basic",
[ ] "MaxPool2dWithIndicesBackwardDynamic3DModule_basic",
[ ] "MaxPool2dWithIndicesBackwardDynamic4DModule_basic",
[ ] "MaxPool2dWithIndicesBackwardStatic3DModule_basic",
[ ] "MaxPool2dWithIndicesBackwardStatic4DModule_basic",
[ ] "MaxPool2dWithIndicesCeilModeTrueModule_basic",
[ ] "MaxPool2dWithIndicesFullSizeKernelModule_basic",
[ ] "MaxPool2dWithIndicesModule_basic",
[ ] "MaxPool2dWithIndicesNonDefaultDilationModule_basic",
[ ] "MaxPool2dWithIndicesNonDefaultParamsModule_basic",
[ ] "MaxPool2dWithIndicesNonDefaultStrideModule_basic",
[ ] "MaxPool3dCeilModeTrueModule_basic",
[ ] "MaxPool3dLargeDatadModule_basic",
[ ] "MaxPool3dModuleRandomSimple_basic",
[ ] "MaxPool3dModule_basic",
[ ] "MeanDimEmptyDimModule_basic",
[ ] "Mlp1LayerModule_basic",
[ ] "Mlp2LayerModuleNoBias_basic",
[ ] "Mlp2LayerModule_basic",
[ ] "MulFloatModule_basic",
[ ] "MulIntModule_basic",
[ ] "NarrowHorizontalTest2_basic",
[ ] "NarrowHorizontalTest_basic",
[ ] "NarrowTensorHorizontalModule_basic",
[ ] "NarrowTensorVerticalModule_basic",
[ ] "NarrowVerticalTest2_basic",
[ ] "NarrowVerticalTest_basic",
[ ] "NativeBatchNorm1DModule_basic",
[ ] "NativeBatchNorm2DModule_basic",
[ ] "NativeBatchNorm3DModule_basic",
[ ] "NativeBatchNormNoneWeightModule_basic",
[ ] "NativeDropoutEvalFloatModule_basic",
[ ] "NativeGroupNormBackwardModule_basic",
[ ] "NativeGroupNormModule_basic",
[ ] "NativeLayerNormDynamicModule_basic",
[ ] "NeFloatIntModule_basic",
[ ] "NeIntModule_basic",
[ ] "NewEmptyStridedModuleDefaultDtype_basic",
[ ] "NllLossModuleBackward1DMeanWeight_basic",
[ ] "NllLossModuleBackward1DMean_basic",
[ ] "NllLossModuleBackward1DSumWeight_basic",
[ ] "NllLossModuleBackward1DSum_basic",
[ ] "NllLossModuleBackward1DWeight_basic",
[ ] "NllLossModuleBackward1D_basic",
[ ] "NllLossModuleBackwardMeanWeight_basic",
[ ] "NllLossModuleBackwardMean_basic",
[ ] "NllLossModuleBackwardSumWeight_basic",
[ ] "NllLossModuleBackwardSum_basic",
[ ] "NllLossModuleBackwardWeight_basic",
[ ] "NllLossModuleBackward_basic",
[ ] "NllLossModuleBackward_ignore_index",
[ ] "NllLossModule_1D_basic",
[ ] "NllLossModule_basic",
[ ] "NllLossModule_ignore_index_out_of_bounds_basic",
[ ] "NllLossModule_mean_basic",
[ ] "NllLossModule_sum_basic",
[ ] "NormScalarModule_basic",
[ ] "NormScalarOptDimKeepDimModule_basic",
[ ] "NormScalarOptDimModule_basic",
[ ] "NormalFunctionalModule_basic",
[ ] "NumToTensorFloatModule_basic",
[ ] "NumToTensorIntModule_basic",
[ ] "NumelModule_basic",
[ ] "NumelZeroRankModule_basic",
[ ] "PixelShuffleModuleFullDynamic_basic",
[ ] "PixelShuffleModuleSpatiallyDynamic_basic",
[ ] "PixelShuffleModuleSpatiallyStatic_basic",
[ ] "PixelShuffleModuleStaticRank3Int64_basic",
[ ] "PowIntFloatModule_basic",
[ ] "PrimMaxIntModule_basic",
[ ] "PrimMinIntDynamicModule_basic",
[ ] "PrimMinIntModule_basic",
[ ] "PrimsConvertElementTypeModule_basic",
[ ] "PrimsSqueezeEmptyDimensionsModule_basic",
[ ] "PrimsSqueezeModule_basic",
[ ] "PrimsViewOfModule_basic",
[ ] "PrimsViewOfZeroRankModule_basic",
[ ] "RandIntDtypeModule_basic",
[ ] "RandIntModule_basic",
[ ] "RandIntPinMemoryModule_basic",
[ ] "ReshapeAliasCollapseModule_basic",
[ ] "ReshapeAliasExpandModule_basic",
[ ] "ReshapeExpandModule_basic",
[ ] "ScalarConstantTupleModule_basic",
[ ] "ScalarImplicitFloatModule_basic",
[ ] "ScalarImplicitIntModule_basic",
[ ] "ScatterReduceFloatMaxModule",
[ ] "ScatterReduceFloatMeanModule",
[ ] "ScatterReduceFloatMeanModuleIncludeSelf",
[ ] "ScatterReduceFloatMinModule",
[ ] "ScatterReduceFloatProdModule",
[ ] "ScatterReduceFloatSumModule",
[ ] "ScatterReduceIntMaxModule",
[ ] "ScatterReduceIntMeanModule",
[ ] "ScatterReduceIntMeanModuleIncludeSelf",
[ ] "ScatterReduceIntMinModule",
[ ] "ScatterReduceIntProdModule",
[ ] "ScatterReduceIntSumModule",
[ ] "SelectScattertModule_basic",
[ ] "SelectScattertStaticModule_basic",
[ ] "SliceEndSleStartModule_basic",
[ ] "SliceOutOfUpperBoundIndexModule_basic",
[ ] "SliceScatterModule_basic",
[ ] "SliceScatterNegativeDimModule_basic",
[ ] "SliceScatterNegativeEndModule_basic",
[ ] "SliceScatterStaticModule_basic",
[ ] "SliceScatterStepVariationModule_basic",
[ ] "SliceScatterZeroDimModule_basic",
[ ] "SliceStartEqEndModule_basic",
[ ] "SoftmaxBackwardModule_basic",
[ ] "SortIntListReverse_basic",
[ ] "SortIntList_basic",
[ ] "SplitDimDynamicModule_basic",
[ ] "SplitDimStaticModule_basic",
[ ] "SqrtIntConstantModule_basic",
[ ] "SqrtIntModule_basic",
[ ] "StdCorrectionEmptyDimModule_basic",
[ ] "StdDimEmptyDimModule_basic",
[ ] "SubFloatModule_basic",
[ ] "SubIntModule_basic",
[ ] "TanhBackward_basic",
[ ] "TensorToBoolZeroRank_basic",
[ ] "TensorToBool_basic",
[ ] "TensorToFloatZeroRank_basic",
[ ] "TensorToFloat_basic",
[ ] "TensorToIntZeroRank_basic",
[ ] "TensorToInt_basic",
[ ] "TestMultipleTensorAndPrimitiveTypesReturn_basic",
[ ] "Threshold1dFloatModule_basic",
[ ] "Threshold1dIntI32Module_basic",
[ ] "Threshold1dIntModule_basic",
[ ] "Threshold2dFloatModule_basic",
[ ] "Threshold2dIntModule_basic",
[ ] "Threshold3dFloatModule_basic",
[ ] "Threshold3dIntModule_basic",
[ ] "ThresholdBackward1dFloatModule_basic",
[ ] "ThresholdBackward1dIntModule_basic",
[ ] "ThresholdBackward1dMixedModule_basic",
[ ] "ThresholdBackward2dFloatModule_basic",
[ ] "ThresholdBackward2dIntModule_basic",
[ ] "ThresholdBackward2dMixedModule_basic",
[ ] "ThresholdBackward3dFloatModule_basic",
[ ] "ThresholdBackward3dIntModule_basic",
[ ] "ThresholdBackward3dMixedModule_basic",
[ ] "ToCopyBoolDTypeStaticModule_basic",
[ ] "ToCopyModule_basic",
[ ] "ToCopyWithDTypeFalsePinMemoryModule_basic",
[ ] "ToCopyWithDTypeModule_basic",
[ ] "TorchPrimLoopForLikeModule_basic",
[ ] "TorchPrimLoopWhileLikeModule_basic",
[ ] "TraceModule_basic",
[ ] "TraceModule_empty",
[ ] "TraceModule_nonsquare",
[ ] "TraceSignedIntModule_basic",
[ ] "TraceUnsignedIntModule_basic",
[ ] "TraceUnsignedIntModule_empty",
[ ] "UniformModule_basic",
[ ] "UniformNoCorrelationModule_basic",
[ ] "UniformStaticShapeModule_basic",
[ ] "UnsafeIndexPutHackedTwin1DFloatNonAccumulateModule_basic",
[ ] "UnsafeView1DFoldModule_basic",
[ ] "UnsafeViewCollapseDynamicWithAtenSizeIntModule_basic",
[ ] "UnsafeViewCollapseModule_basic",
[ ] "UnsafeViewDynamicExpandModule_basic",
[ ] "UnsafeViewDynamicExpandWithAtenSizeIntModule_basic",
[ ] "UnsafeViewExpandModule_basic",
[ ] "UpSampleNearest2dBackwardScalesNone_basic",
[ ] "UpSampleNearest2dBackward_basic",
[ ] "UpSampleNearest2dDynamicFactor_basic",
[ ] "UpSampleNearest2dStaticFactor_basic",
[ ] "UpSampleNearest2d_basic",
[ ] "VarCorrectionEmptyDimModule_basic",
[ ] "VarDimEmptyDimModule_basic",
[ ] "ViewCollapseDynamicWithAtenSizeIntModule_basic",
[ ] "ViewCollapseModule_basic",
[ ] "ViewDynamicExpandCollapseModule_basic",
[ ] "ViewDynamicExpandCollapseWithAtenIntModule_basic",
[ ] "ViewDynamicExpandModule_basic",
[ ] "ViewDynamicExpandWithAtenSizeIntModule_basic",
[ ] "ViewExpandDynamicDimModule_basic",
[ ] "ViewNoChange1dModule_basic",
[ ] "ViewNoChange2dModule_basic",
[ ] "ViewNoChange3dModule_basic",
[ ] "_Convolution2DAllFalseModule_basic",
[ ] "_Convolution2DBenchmarkModule_basic",
[ ] "_Convolution2DCudnnModule_basic",
[ ] "_Convolution2DDeterministicModule_basic",
[ ] "_Convolution2DTF32Module_basic",
[ ] "_ConvolutionDeprecated2DAllFalseModule_basic",
[ ] "_ConvolutionDeprecated2DBenchmarkModule_basic",
[ ] "_ConvolutionDeprecated2DCudnnModule_basic",
[ ] "_ConvolutionDeprecated2DDeterministicModule_basic",
[ ] "_SoftmaxModule_basic",
Fixed:
[x] #550
[x] #551
[x] #552
[x] "AdaptiveAvgPool1dNonUnitOutputSizeStaticModule_basic",
[x] "AdaptiveAvgPool1dStaticEvenMultiple_basic",
[x] "AdaptiveAvgPool2dNonUnitOutputSizeStaticModule_basic",
[x] "AvgPool1dFloatModule_basic",
[x] "AvgPool1dIntModule_basic",
[x] "AvgPool1dStaticModule_basic",
[x] "AvgPool2dCeilModeTrueModule_basic",
[x] "AvgPool2dFloatModule_basic",
[x] "AvgPool2dIntModule_basic",
[x] "AvgPool2dStaticModule_basic",
[x] #556
[x] "EinsumStaticContractRhsModule_basic",
[x] "EinsumStaticFourDimensionModule_basic",
[x] "EinsumStaticModule_basic",
[x] "MseLossSumReductionWithDifferentElemTypeModule_basic",
[x] "ReduceL3NormAllDimsModule_basic",
[x] "ReduceSumDtypeFloatModule_basic",
[x] "ReduceSumDtypeIntModule_basic",
[x] "ReduceSumElementTypeBoolModule_basic",
[x] "ReduceSumFloatModule_basic",
[x] "ReduceSumSignedIntModule_basic",
[x] "ReduceSumUnsignedIntModule_basic",
[x] "SortTensorDescending_basic",
[x] "SortTensorInteger_basic",
[x] "SortTensorNegativeDimension_basic",
[x] "SortTensorSpecificDimension_basic",
[x] "SortTensor_basic",
[x] #590
[x] #591
[x] "BucketizeTensorStaticFloatModule_basic",
[x] "BucketizeTensorStaticModule_basic",
[x] "ElementwiseUnsqueezeNegDimsModule_basic",
[x] "GroupNormModule_basic",
[x] "TensorsStackNegativeDimModule_basic",
[x] "TensorsStackPromoteDTypeModule_basic",
This tracker is no longer maintained. @zjgarvey I think you've been reporting the status of these tests. What should we do with this tracker?
| gharchive/issue | 2024-03-25T15:54:13 | 2025-04-01T06:45:09.533632 | {
"authors": [
"vivekkhandelwal1"
],
"repo": "nod-ai/SHARK-Turbine",
"url": "https://github.com/nod-ai/SHARK-Turbine/issues/549",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
187724307 | PushKit support
This wonderful node-apn should support PushKit very soon.
Would this be related to the error "UNABLE_TO_GET_ISSUER_CERT_LOCALLY" ?
I tried updating from 1.7.6 to 2.1.2 and I notice the voip option has been removed from the new apn.Provider (previously apn.Connection) and my previously working project now throws that error at me when I try to send a notification.
@Slessi I don't think this is related. A 4KB payload was only supported for VoIP notifications. But now all notifications can have a payload of up to 4KB. So when the voip flag was set, we increased the max payload size to 4KB.
@merajmehrabi What is required to support PushKit?
@merajmehrabi I got it working with pushkit just fine. There were two things I had to do.
append ".voip" to the notification.topic
(i.e. from "com.example.app" to "com.example.app.voip")
use the token returned from pushkit as the device token.
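The two steps above can be sketched in plain JavaScript. The helper name `voipTopic` below is mine, not part of node-apn; the node-apn calls referenced in the comments (`apn.Notification`, `provider.send`) reflect the 2.x API but are left as comments so the snippet runs on its own.

```javascript
// Minimal sketch (helper name is hypothetical, not from node-apn). With
// node-apn 2.x this would be used roughly as:
//   const note = new apn.Notification();
//   note.topic = voipTopic('com.example.app'); // instead of the plain bundle id
//   provider.send(note, pushKitDeviceToken);   // token from PushKit, not APNs
function voipTopic(bundleId) {
  // PushKit (VoIP) notifications must target the ".voip" topic variant.
  return bundleId.endsWith('.voip') ? bundleId : bundleId + '.voip';
}

console.log(voipTopic('com.example.app')); // → com.example.app.voip
```

The device token must come from the PushKit registration callback, not from the regular APNs registration.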
Thank you for the answer @erva- I just spent about 4 hours pulling my hair out wondering why pushkit wasn't working through using node-apn but I could send them fine with the houston CLI. I added .voip to the topic and it worked no problem!
@merajmehrabi Thanks for the answer mate 👍
| gharchive/issue | 2016-11-07T14:25:07 | 2025-04-01T06:45:09.548241 | {
"authors": [
"Slessi",
"astanton",
"erva-",
"florianreinhart",
"merajmehrabi",
"nickmendes"
],
"repo": "node-apn/node-apn",
"url": "https://github.com/node-apn/node-apn/issues/464",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1080347876 | D03: Output lists package versions but not package names
The output of D03 - checking for outdated dependencies - includes an array of package versions, but no information on which packages they relate to. That makes it hard to know what to do next.
node_1 | {
node_1 | test: false,
node_1 | packages: [
node_1 | '^0.6.4', '~4.3.2',
node_1 | '~2.3.0', '^3.5.17',
node_1 | '^1.4.0', '^4.1.1',
node_1 | '~1.4.3', '^3.13.1'
node_1 | ]
node_1 | }
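For context, the eventual fix needs to pair each version range with the package it belongs to. A hedged sketch (function and variable names are hypothetical, not the actual node-red-dev-cli code) of what that pairing could look like when reading a package.json dependencies map:

```javascript
// Illustrative sketch: turn a package.json "dependencies" object into
// human-readable "name@range" entries so the D03 report can say *which*
// package each version range belongs to.
function describeDependencies(deps) {
  return Object.entries(deps).map(([name, range]) => `${name}@${range}`);
}

const deps = { express: '^4.1.1', lodash: '~4.3.2' };
console.log(describeDependencies(deps)); // → [ 'express@^4.1.1', 'lodash@~4.3.2' ]
```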
fixed in https://github.com/node-red/node-red-dev-cli/commit/6066c840609d1a90c873c5ce472117fa0b1df226
@knolleary can you close this one, I can't.
| gharchive/issue | 2021-12-14T22:15:23 | 2025-04-01T06:45:09.563565 | {
"authors": [
"knolleary",
"sammachin"
],
"repo": "node-red/node-red-dev-cli",
"url": "https://github.com/node-red/node-red-dev-cli/issues/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
33726958 | send a video/image file in xmpp
Hi all,
Plz somebody help me i am working on xmpp base chat application i want to send a video as well as image like other application how can i do this ihave a code which is sending image only as a file. is this also work for video ? and what is max size of video.
2nd things is how can i receive this and image/video and show this.
Thanks in Adv
Keyideas
@lloydwatkin : The link is broken. Can you provide an updated link?
@sonnyp ?
@lloydwatkin ?
| gharchive/issue | 2014-05-17T09:09:49 | 2025-04-01T06:45:09.583384 | {
"authors": [
"KeyideasGlobal",
"karanbalkar",
"lloydwatkin",
"sonnyp"
],
"repo": "node-xmpp/node-xmpp",
"url": "https://github.com/node-xmpp/node-xmpp/issues/247",
"license": "isc",
"license_type": "permissive",
"license_source": "bigquery"
} |
268615362 | Should we work on making CommComm repos Code + Learn friendly?
There's been talk over the last year about opening up the Code + Learn program to repos other than Node.js core. The real limitation is people resources–mentors from a repo able to provide clear onboarding/contributing/collaborating guidelines , good-first-pr issues, and availability for reviewing PRs.
Code + Learn helps gain contributors long-term. It would be great to invest in this for our work as well.
@hackygolucky This seems like a great idea.
From what I've observed, the #1 need in the CommComm right now is active collaborators. Given that need, I would propose that primary focus should be on recruiting active members.
@hackygolucky What are the steps needed to help enable this? 🤔
Getting a couple of our issues around documenting onboarding and contributing PRed and walking through getting started with a willing participant to see what gaps we still have that make it too difficult.
Scheduling some admin time on a weekly basis to make sure that issues are appropriately labeled, including good-first-contribution.
Watch for the 2018 Code + Learn schedule and start coordinating if any CommComm members will be there and would like to mentor. This also requires that we keep up with the good-first-contribution labels.
Added "good first issue" to this, since it would probably be helpful if someone who wanted to start helping out on managing it... meta, I know 😅
Happy to help mentor if needed as well.
Bump on this thread ❤️ @bnb, @hackygolucky, think this is still worth pursuing? I know Website Redesign would like to do an educational session of sorts – may be a good candidate as a C+L supplement.
Pinging this again. Discussed in today's CommComm meeting since there's a lack of tests for Code & Learn contributors to contribute ❤️
I think it's certainly still worth doing, and I laid out the firs steps that we should do to start the work and begin to maintain it, but I can't volunteer myself to do that work. This is parking lot/backlog unless someone else can pick it up 😬
I've unarchived this repo so I can close all PRs and issues before re-archiving.
| gharchive/issue | 2017-10-26T03:09:09 | 2025-04-01T06:45:09.674406 | {
"authors": [
"Trott",
"amiller-gh",
"bnb",
"dshaw",
"hackygolucky"
],
"repo": "nodejs/community-committee",
"url": "https://github.com/nodejs/community-committee/issues/158",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
378934701 | Improve edge case error
I've been thinking about the circular edge case error, and that when it happens it might be very cryptic to debug and know how to resolve it.
The other thing here is that a user might have a valid usage in this edge case that doesn't hit on undefined behaviour, but the error is being overly cautious so that they would still not be able to write valid code like:
a.mjs
import './b.mjs';
export * from 'dynamic';
b.mjs
import * as M from './a.mjs';
export function calledLater () {
M.x();
}
Where they may not be accessing the exports directly until later when they would be defined.
Instead users would get a Dynamic namespace unexecuted in cycle error, pointing at the namespace, and just be very confused.
In the name of improving this, I thought - why not just allow the namespace to be available, but just an empty object in this interim phase. Initially I wasn't sure this was possible but in writing the v8 implementation it turned out to be quite straightforward.
This PR updates dynamic modules so that, instead of throwing an error in this case, they simply provide an empty object for the namespace; this only affects these circular edge cases.
I think in practical usage, this will be much more user-friendly actually, despite what might seem like a slight inconsistency.
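The "valid usage" pattern this enables can be illustrated with a plain-object analogy. This is only an analogy; the real mechanism is the module namespace object in the interim phase, not an ordinary JavaScript object:

```javascript
// Plain-JavaScript analogy (not the actual module-namespace machinery):
// M starts out empty, a function closes over it, and the export is filled in
// before the function is ever called, so deferred access works fine.
const M = {};                 // stands in for the interim (empty) namespace

function calledLater() {      // mirrors b.mjs: uses M.x only when invoked
  return M.x();
}

M.x = () => 42;               // stands in for the dynamic module executing

console.log(calledLater());   // → 42
```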
Feedback welcome!
The corresponding ECMA-262 change for this is at https://github.com/tc39/ecma262/compare/master...guybedford:dynamic-module-changes-2
This change SGTM based on the README; I haven't reviewed all the spec text.
Thanks @bmeck for the review. I'm going through the process now to update the master branches for the ECMA-262 spec, this spec and the v8 implementation. Further feedback on the approach always welcome.
| gharchive/pull-request | 2018-11-08T21:59:14 | 2025-04-01T06:45:09.696928 | {
"authors": [
"guybedford",
"littledan"
],
"repo": "nodejs/dynamic-modules",
"url": "https://github.com/nodejs/dynamic-modules/pull/10",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
787774198 | compile_commands_json generator ignores output directory
Node Version: v14.15.4, npm 6.14.10
Platform: Linux 5.4.0-62-generic #70-Ubuntu SMP Tue Jan 12 12:45:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
Compiler: gcc version 9.3.0 (Ubuntu 9.3.0-17ubuntu1~20.04)
Module: Local module
Steps to reproduce
Generate compile_commands.json:
$ node-gyp configure -- -f compile_commands_json
Expected behavior
The command above produces two files Debug/compile_commands.json and Release/compile_commands.json in the build directory like make generator:
$ node-gyp configure
Actual behavior
The command above produces two files Debug/compile_commands.json and Release/compile_commands.json in the current directory:
Verbose output:
$ ls -1
binding.gyp
node_modules
package-lock.json
test.cc
test.js
$ node_modules/.bin/node-gyp configure --verbose -- -f compile_commands_json
gyp info it worked if it ends with ok
gyp verb cli [
gyp verb cli '/home/kostya/.nvm/versions/node/v14.15.4/bin/node',
gyp verb cli '/home/kostya/tmp/napi-test/node_modules/.bin/node-gyp',
gyp verb cli 'configure',
gyp verb cli '--verbose',
gyp verb cli '--',
gyp verb cli '-f',
gyp verb cli 'compile_commands_json'
gyp verb cli ]
gyp info using node-gyp@7.1.2
gyp info using node@14.15.4 | linux | x64
gyp verb command configure [ '-f', 'compile_commands_json' ]
gyp verb find Python Python is not set from command line or npm configuration
gyp verb find Python Python is not set from environment variable PYTHON
gyp verb find Python checking if "python3" can be used
gyp verb find Python - executing "python3" to get executable path
gyp verb find Python - executable path is "/usr/bin/python3"
gyp verb find Python - executing "/usr/bin/python3" to get version
gyp verb find Python - version is "3.8.5"
gyp info find Python using Python version 3.8.5 found at "/usr/bin/python3"
gyp verb get node dir no --target version specified, falling back to host node version: 14.15.4
gyp verb command install [ '14.15.4' ]
gyp verb install input version string "14.15.4"
gyp verb install installing version: 14.15.4
gyp verb install --ensure was passed, so won't reinstall if already installed
gyp verb install version is already installed, need to check "installVersion"
gyp verb got "installVersion" 9
gyp verb needs "installVersion" 9
gyp verb install version is good
gyp verb get node dir target node version installed: 14.15.4
gyp verb build dir attempting to create "build" dir: /home/kostya/tmp/napi-test/build
gyp verb build dir "build" dir needed to be created? /home/kostya/tmp/napi-test/build
gyp verb build/config.gypi creating config file
gyp verb build/config.gypi writing out config file: /home/kostya/tmp/napi-test/build/config.gypi
gyp verb config.gypi checking for gypi file: /home/kostya/tmp/napi-test/config.gypi
gyp verb common.gypi checking for gypi file: /home/kostya/tmp/napi-test/common.gypi
gyp info spawn /usr/bin/python3
gyp info spawn args [
gyp info spawn args '/home/kostya/tmp/napi-test/node_modules/node-gyp/gyp/gyp_main.py',
gyp info spawn args 'binding.gyp',
gyp info spawn args '-f',
gyp info spawn args 'compile_commands_json',
gyp info spawn args '-I',
gyp info spawn args '/home/kostya/tmp/napi-test/build/config.gypi',
gyp info spawn args '-I',
gyp info spawn args '/home/kostya/tmp/napi-test/node_modules/node-gyp/addon.gypi',
gyp info spawn args '-I',
gyp info spawn args '/home/kostya/.cache/node-gyp/14.15.4/include/node/common.gypi',
gyp info spawn args '-Dlibrary=shared_library',
gyp info spawn args '-Dvisibility=default',
gyp info spawn args '-Dnode_root_dir=/home/kostya/.cache/node-gyp/14.15.4',
gyp info spawn args '-Dnode_gyp_dir=/home/kostya/tmp/napi-test/node_modules/node-gyp',
gyp info spawn args '-Dnode_lib_file=/home/kostya/.cache/node-gyp/14.15.4/<(target_arch)/node.lib',
gyp info spawn args '-Dmodule_root_dir=/home/kostya/tmp/napi-test',
gyp info spawn args '-Dnode_engine=v8',
gyp info spawn args '--depth=.',
gyp info spawn args '--no-parallel',
gyp info spawn args '--generator-output',
gyp info spawn args 'build',
gyp info spawn args '-Goutput_dir=.'
gyp info spawn args ]
gyp info ok
$ ls -1
binding.gyp
build
Debug
node_modules
package-lock.json
Release
test.cc
test.js
Any thoughts? Also noticed this was a bit broken
| gharchive/issue | 2021-01-17T19:13:58 | 2025-04-01T06:45:09.712056 | {
"authors": [
"alexweej",
"ikokostya"
],
"repo": "nodejs/node-gyp",
"url": "https://github.com/nodejs/node-gyp/issues/2305",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
936360653 | Test Issue created at Sun, 04 Jul 2021 02:15:38 GMT
Test issue body Sun, 04 Jul 2021 02:15:38 GMT
Comment on issue at Sun, 04 Jul 2021 02:15:39 GMT
| gharchive/issue | 2021-07-04T02:15:38 | 2025-04-01T06:45:09.998994 | {
"authors": [
"nodemationqa"
],
"repo": "nodemationqa/nodeQA",
"url": "https://github.com/nodemationqa/nodeQA/issues/234",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
796242593 | recommended way to install the latest stable version?
What is the nodenv equivalent of nvm install stable?
At present this isn't possible. Due to the way ruby-build (and therefore node-build) creates versions, the definition file becomes the name of the installed version, which would mean that a build definition named stable would then create a node named stable (which wouldn't be matched by a .node-version file that specifies a version number, nor would it be automatically updated, so even the name 'stable' would be a lie once a new release drops). There are a host of other complications, which is why this hasn't been completed yet; but there is a tracking issue https://github.com/nodenv/node-build/issues/145
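A common workaround (not an official nodenv feature) is to pick the highest plain x.y.z entry from node-build's definition list. The `nodenv install -l` invocation below is shown only in a comment; the filtering pipeline itself is demonstrated against a hard-coded sample listing so it can run anywhere:

```shell
# Workaround sketch (assumes node-build's `nodenv install -l` listing; there
# is no built-in "stable" alias). In practice you would pipe that listing
# through this filter and install the result:
#   nodenv install "$(nodenv install -l | grep -E '^[0-9]+\.[0-9]+\.[0-9]+$' | sort -V | tail -1)"
versions='12.22.0
13.14.0-rc1
14.15.4
chakracore-10.13.0
14.9.0'
# Keep only plain x.y.z releases, version-sort them, take the newest.
latest=$(printf '%s\n' "$versions" | grep -E '^[0-9]+\.[0-9]+\.[0-9]+$' | sort -V | tail -1)
echo "$latest"   # → 14.15.4
```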
| gharchive/issue | 2021-01-28T18:21:09 | 2025-04-01T06:45:10.005592 | {
"authors": [
"erjoalgo",
"jasonkarns"
],
"repo": "nodenv/nodenv",
"url": "https://github.com/nodenv/nodenv/issues/171",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1584505238 | fix: add process.exit(0) in ON_DEATH hook
Ensure quick shutdown when browsers are holding open a connection.
Fixes: https://github.com/nodeshift/faas-js-runtime/issues/120
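The idea behind the one-line fix can be sketched as follows. This is a hedged illustration, not the actual faas-js-runtime code: the real hook comes from the `death` package's ON_DEATH, while here the hook registration and the exit function are injected so the snippet runs stand-alone:

```javascript
// Calling server.close() alone waits for keep-alive connections (e.g. from
// browsers) to drain, so the death hook also forces the process to exit.
function wireShutdown(server, onDeath, exitFn) {
  onDeath(() => {
    server.close();   // stop accepting new connections
    exitFn(0);        // don't wait for open keep-alive sockets to drain
  });
}

// Tiny in-memory demo with fakes standing in for the HTTP server and process:
let closed = false, exitCode = null;
const fakeServer = { close: () => { closed = true; } };
let handler;
wireShutdown(fakeServer, (fn) => { handler = fn; }, (code) => { exitCode = code; });
handler(); // simulate SIGINT/SIGTERM firing the death hook
console.log(closed, exitCode); // → true 0
```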
Thanks @helio-frota !
| gharchive/pull-request | 2023-02-14T16:52:34 | 2025-04-01T06:45:10.007069 | {
"authors": [
"helio-frota",
"lance"
],
"repo": "nodeshift/faas-js-runtime",
"url": "https://github.com/nodeshift/faas-js-runtime/pull/172",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1560821371 | Reduce the prefixes
Using certain prefixes such as ellipses and question marks may disrupt normal chat flow in a WhatsApp group. These characters are often used in casual conversation to indicate a trailing off or a question, and using them as prefixes to call a bot may confuse or distract users.
It is also important to consider that using too many variations of prefixes may be overwhelming for users and make it difficult for them to remember which prefix to use. It would be better to keep the prefixes simple, easy to remember, and easy to type. Make sure the prefixes are not commonly used in group chat and are easy to distinguish from normal conversation.
Thanks for the PR. And your reason makes sense.
| gharchive/pull-request | 2023-01-28T11:13:27 | 2025-04-01T06:45:10.030500 | {
"authors": [
"niitamer",
"noelzappy"
],
"repo": "noelzappy/chatgpt-whatsapp",
"url": "https://github.com/noelzappy/chatgpt-whatsapp/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1460712120 | Nargo WASM compilation failure --- macOS 12.6 Monterey
Description
Following instructions on https://noir-lang.github.io/book/getting_started/nargo/installation.html with Option 1: WASM Executable Backend.
Aim
Install Nargo.
Expected behavior
Compile to succeed.
Bug
For option 1: I set
aztec_backend = { optional = true, git = "https://github.com/noir-lang/aztec_backend", features = ["wasm-base"] , default-features = false }
as described in the documentation.
Running
cargo install --locked --path=.
error[E0599]: no method named `uint8view` found for reference `&wasmer::Memory` in the current scope
--> /Users/michaelneuder/.cargo/git/checkouts/aztec_backend-a697fb631cbad807/8dc5b28/src/barretenberg_wasm/mod.rs:51:43
|
51 | for (byte_id, cell) in memory.uint8view()[offset..(offset + arr.len())]
| ^^^^^^^^^ method not found in `&wasmer::Memory`
error[E0061]: this function takes 1 argument but 0 arguments were supplied
--> /Users/michaelneuder/.cargo/git/checkouts/aztec_backend-a697fb631cbad807/8dc5b28/src/barretenberg_wasm/mod.rs:66:23
|
66 | return memory.view()[start as usize..end]
| ^^^^-- an argument of type `&_` is missing
|
note: associated function defined here
--> /Users/michaelneuder/.cargo/registry/src/github.com-1ecc6299db9ec823/wasmer-3.0.0/src/sys/externals/memory.rs:88:12
|
.
.
.
--> /Users/michaelneuder/.cargo/registry/src/github.com-1ecc6299db9ec823/wasmer-3.0.0/src/sys/instance.rs:113:12
|
113 | pub fn new(
| ^^^
help: provide the argument
|
217 | (Instance::new(/* value */, &module, &res_import).unwrap(), memory)
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Some errors have detailed explanations: E0061, E0308, E0599, E0608.
For more information about an error, try `rustc --explain E0061`.
error: could not compile `aztec_backend` due to 20 previous errors
To reproduce
(Describe the steps to reproduce the behavior.)
git clone git@github.com:noir-lang/noir.git
cd noir/crates/nargo
$ cargo install --locked --path=.
Environment
OS: macOS 12.6
Versions:
rustc --version
rustc 1.64.0 (a55dd71d5 2022-09-19)
cmake --version
cmake version 3.25.0
clang --version
Apple clang version 14.0.0 (clang-1400.0.29.202)
For nargo users
noir-lang/noir commit cloned:
Proving backend
[ ] default
Clang: (run clang --version)
[x] wasm-base
Note that I also tried https://noir-lang.github.io/book/getting_started/nargo/installation.html#option-2-compile-backend-from-source and changed
aztec_backend = { optional = true, git = "https://github.com/noir-lang/aztec_backend", rev = "01b922adcb5a9d70b2d12304e1cb7487d9f28188" }
Running
bash -c "RUST_BACKTRACE=FULL cargo install --locked --path=."
2 errors generated.
make[2]: *** [src/aztec/rollup/proofs/CMakeFiles/rollup_proofs_objects.dir/account/c_bind.cpp.o] Error 1
make[1]: *** [src/aztec/rollup/proofs/CMakeFiles/rollup_proofs_objects.dir/all] Error 2
2 errors generated.
make[2]: *** [src/aztec/rollup/proofs/CMakeFiles/rollup_proofs_test_objects.dir/account/account_tx.test.cpp.o] Error 1
2 errors generated.
make[2]: *** [src/aztec/stdlib/merkle_tree/CMakeFiles/stdlib_merkle_tree_test_objects.dir/memory_tree.test.cpp.o] Error 1
make[1]: *** [src/aztec/stdlib/merkle_tree/CMakeFiles/stdlib_merkle_tree_test_objects.dir/all] Error 2
2 errors generated.
2 errors generated.
make[2]: *** [src/aztec/rollup/proofs/CMakeFiles/rollup_proofs_test_objects.dir/escape_hatch/escape_hatch_tx.test.cpp.o] Error 1
make[2]: *** [src/aztec/rollup/proofs/CMakeFiles/rollup_proofs_test_objects.dir/escape_hatch/escape_hatch.test.cpp.o] Error 1
make[1]: *** [src/aztec/rollup/proofs/CMakeFiles/rollup_proofs_test_objects.dir/all] Error 2
make: *** [all] Error 2
thread 'main' panicked at '
command did not execute successfully, got: exit status: 2
build script failed, must exit now', /Users/michaelneuder/.cargo/registry/src/github.com-1ecc6299db9ec823/cmake-0.1.49/src/lib.rs:1104:5
stack backtrace:
0: rust_begin_unwind
at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/std/src/panicking.rs:584:5
1: core::panicking::panic_fmt
at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/core/src/panicking.rs:142:14
2: cmake::fail
3: cmake::run
4: cmake::Config::build
5: build_script_build::main
6: core::ops::function::FnOnce::call_once
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
error: failed to compile `nargo v0.1.0 (/Users/michaelneuder/github/noir/crates/nargo)`, intermediate artifacts can be found at `/Users/michaelneuder/github/noir/target
So it seems like neither installation method is working for my machine
@michaelneuder Thanks for the report, build is currently broken
See this pre-built binary nargo-x86_64-apple-darwin.
thanks, this is working!
| gharchive/issue | 2022-11-22T23:43:21 | 2025-04-01T06:45:10.041846 | {
"authors": [
"kobyhallx",
"michaelneuder"
],
"repo": "noir-lang/noir",
"url": "https://github.com/noir-lang/noir/issues/518",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2705927113 | feat(ssa): Implement missing brillig constraints SSA check
Description
This adds a new check to the security SSA passes module warning the developer about insufficiently constrained brillig calls.
Problem
Brillig function calls not properly constrained by a later assert should be reported as bugs, as they present potential soundness vulnerabilities. This is referenced in and resolves #5425.
Summary
For now, we consider descendant values of every call result value and at least one of the argument values (if there are any) being involved in a set of constraint instructions proper coverage. If a result value itself (and not its descendant) is constrained (against a constant, for example), that too is currently considered proper. To check for it, we collect the descendant value ids for each brillig call met in a function and check them against constraints met later. If any of the brillig calls aren't sufficiently covered, we issue security bug warnings for them.
To illustrate this, the program mentioned in #5425 is also included in test-programs.
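The coverage rule described above can be sketched as a toy Python model (not the actual Rust pass — value ids, the def-use map, and both function names are illustrative, and the real check handles more cases, e.g. a result constrained directly against a constant):

```python
from collections import deque

def descendants(start, uses):
    """All values reachable from `start` (inclusive) through def-use edges."""
    seen, queue = set(start), deque(start)
    while queue:
        v = queue.popleft()
        for u in uses.get(v, ()):
            if u not in seen:
                seen.add(u)
                queue.append(u)
    return seen

def is_properly_covered(call_args, call_results, uses, constrained):
    """Toy version of the check: every result (or a descendant of it) and,
    if the call has arguments, at least one argument's descendants must be
    touched by a constraint instruction."""
    results_ok = all(descendants({r}, uses) & constrained for r in call_results)
    args_ok = (not call_args) or any(
        descendants({a}, uses) & constrained for a in call_args
    )
    return results_ok and args_ok
```

A Brillig call failing this intersection test would be the one reported as a security warning.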
Additional Context
This check has been tested on noir-protocol-circuits to measure how it affects compiler performance (employing --benchmark-codegen) and see how many bug occurrences there would be. At the time, the check catches 607 brillig calls there, and the performance hit seems to be negligible. In testing, some false positives have been revealed and addressed, with tests added accordingly.
For some reason, seemingly unrelated to the new check (should happen way before it, and anyway it wouldn't touch a function with no brillig calls) after merging in the latest master the unconditional_recursion test in the inliner fails on CI environment (getting caught with stack overflow before reaching RECURSION_LIMIT) while passing both on mainframe and my dev machine. CI test reruns didn't help. Mitigated this by lowering RECURSION_LIMIT in the inliner for now, please decide if that should stay.
Documentation
Check one:
[x] No documentation needed.
[ ] Documentation included in this PR.
[ ] [For Experimental Features] Documentation to be submitted in a separate PR.
PR Checklist
[x] I have tested the changes locally.
[x] I have formatted the changes with Prettier and/or cargo fmt on default settings.
Maybe an illustration like this could be helpful at the top of the method:
v1 v2 v3
\ / \ /
\ / \ /
v4 v5 = call(v2, v3)
|\ |
| \ |
| \ |
| \ |
| \ |
| v6 = call(v5, v4)
| /
| /
| /
| /
constrain(v4, v6)
With an explanation that we're looking for the ancestors or the constraint to intersect both the i) descendants (results) and the ii) ancestors (arguments) of the Brillig calls.
| gharchive/pull-request | 2024-11-29T18:51:23 | 2025-04-01T06:45:10.048791 | {
"authors": [
"aakoshh",
"rkarabut"
],
"repo": "noir-lang/noir",
"url": "https://github.com/noir-lang/noir/pull/6658",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
76772053 | Email with name
Hello! This is really tiny validator, but it will be great to pass validation for emails with names like
Sender Name <mylovelymail@gooddomain.com>
Thanks for the suggestion. I think this would be better done as a wrapper around the library rather than building it in.
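For reference, the suggested wrapper approach can be sketched — shown here in Python for brevity (this library is JavaScript, but the split-then-validate idea is the same; Python's standard `email.utils.parseaddr` already understands the `Name <addr>` form):

```python
from email.utils import parseaddr

def split_named_address(value):
    """Split 'Sender Name <addr>' (or a bare address) into (name, addr).

    The addr part can then be handed to any plain email validator,
    keeping the validator itself tiny, as suggested above.
    """
    name, addr = parseaddr(value)
    if not addr:
        raise ValueError(f"no address found in {value!r}")
    return name, addr
```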
| gharchive/issue | 2015-05-15T16:00:53 | 2025-04-01T06:45:10.052877 | {
"authors": [
"SilverFire",
"nojacko"
],
"repo": "nojacko/email-validator",
"url": "https://github.com/nojacko/email-validator/issues/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
168694597 | Doesn't return stream for large databases
Here's my app.js in node:
var express = require("express");
var repStream = require('express-pouchdb-replication-stream');
var app = express();
app.use(function(req, res, next) {
res.header("Access-Control-Allow-Origin", "http://localhost:4200");
res.header("Access-Control-Allow-Credentials", true);
res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
next();
});
app.get("/",function(req,res){
res.send("<h1>Hello from EC2</h1>");
});
app.use('/api/couchdb/:db', repStream({
url : 'http://sync:4theg00doftheherd@54.149.99.123:5984/',
dbReq : true,
replication: {batch_size: 100}
}));
app.listen(80);
"app.js" 22L, 650C
It works fine for small databases, but when I request a larger CouchDB database (20MB+) nothing gets returned. Is there some sort of setting that I need to turn on? Thanks
if you're on Amazon there is a distinct chance of buffering being an issue
Thanks @calvinmetcalf - I am indeed on AWS (EC2), any ideas how to fix it? Or should I try hosting somewhere else?
you probably have nginx or something set up
@calvinmetcalf is there something I can configure in nginx to fix it?
Yes, I believe you should be turning off 'proxy_buffering'. There is something called 'server-sent events' which has similar issues, so if you google for guides to getting server-sent events to work with nginx you'll find the instructions you need
| gharchive/issue | 2016-08-01T17:06:43 | 2025-04-01T06:45:10.075689 | {
"authors": [
"calvinmetcalf",
"refrigerator"
],
"repo": "nolanlawson/pouchdb-replication-stream",
"url": "https://github.com/nolanlawson/pouchdb-replication-stream/issues/54",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
270525132 | Two Way Control with Vertical thumbnails
Hi, is this vertical slide possible?
https://jsfiddle.net/q827wb0a/244/
Thanks.
Do you want to ask a question? Are you looking for support? The Swiper forum and Stack Overflow are the best places for getting support
Please, don't use GitHub issues for questions
| gharchive/issue | 2017-11-02T03:56:39 | 2025-04-01T06:45:10.078028 | {
"authors": [
"Uranbold",
"nolimits4web"
],
"repo": "nolimits4web/Swiper",
"url": "https://github.com/nolimits4web/Swiper/issues/2303",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1508232461 | Xps datamodel v new
@sanbrock and @domna, here is the new pull request replacing #58. I have tried my best to follow @domna's comments from the last pull request, #58.
LGTM
| gharchive/pull-request | 2022-12-22T17:02:13 | 2025-04-01T06:45:10.079109 | {
"authors": [
"RubelMozumder",
"sanbrock"
],
"repo": "nomad-coe/nomad-parser-nexus",
"url": "https://github.com/nomad-coe/nomad-parser-nexus/pull/61",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
523950518 | Purpose of pending_writes?
Is there any reason for having Link::Modified? Apart from enforcing some invariants like preventing modified trees from being pruned, I suppose removing Link::Modified would save a bit of ops/sec.
Apart from that, I am also curious if there is a need for pending_writes in Link::Modified.
Hey @iwasaki-kenta, thanks for your interest in merk.
Maybe it should be better documented but it's there because applying changes to the tree actually takes 2 passes: a first pass where we just change the tree structure (marking some links as Modified, where their hash has not been computed yet) and a second pass where we recompute hashes and write the changes to RocksDB.
I went with this strategy because sometimes we traverse through subtrees more than once, for instance when we delete a node and all of its ancestors now need to be re-hashed, so if we computed hashes as we traversed then sometimes we would rehash a node multiple times in one batch.
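The two-pass scheme can be illustrated with a toy Python sketch (merk itself is Rust; the `Node` type and `hash = None` standing in for `Link::Modified` are illustrative assumptions):

```python
import hashlib

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right
        self.hash = None       # None plays the role of Link::Modified
        self.rehash_count = 0  # counts hash computations, to show "once"

def mark_modified(node):
    """First pass: a structural change only marks the node as modified;
    no hashing happens yet, however many ops in the batch touch it."""
    node.hash = None

def compute_hashes(node):
    """Second pass: post-order walk that hashes each modified node
    exactly once and returns its (now valid) hash."""
    if node is None:
        return b""
    left = compute_hashes(node.left)
    right = compute_hashes(node.right)
    if node.hash is None:
        node.rehash_count += 1
        node.hash = hashlib.sha256(node.key.encode() + left + right).digest()
    return node.hash
```

Even if a batch marks the same node several times, the second pass hashes it only once.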
Apart from that, I am also curious if there is a need for pending_writes in Link::Modified.
This isn't currently used, but it is there for the future addition of concurrent ops, where we will be able to use pending_writes as a heuristic for evenly breaking up the work among workers for maximum CPU utilization.
Gotcha - thanks a lot for the insight! I'll close this off now.
| gharchive/issue | 2019-11-17T07:45:36 | 2025-04-01T06:45:10.105478 | {
"authors": [
"iwasaki-kenta",
"mappum"
],
"repo": "nomic-io/merk",
"url": "https://github.com/nomic-io/merk/issues/29",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
413747653 | fix some code to compile on latest 2.13 nightly
this change should be fine on other Scala versions too. Flags are
Longs, always have been, the compiler is just pickier now about
coercing Longs to Ints
to ensure that this is mergeable, I tested the combination of this and #94 locally with sbt +test. but it would really be better if #93 were fixed
Thanks, @SethTisue!
| gharchive/pull-request | 2019-02-23T22:12:01 | 2025-04-01T06:45:10.123328 | {
"authors": [
"SethTisue",
"TomasMikula"
],
"repo": "non/kind-projector",
"url": "https://github.com/non/kind-projector/pull/92",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
864311009 | IndexError: list index out of range
Dear,
I'm trying to run the example but it fails.
sherlock@dt-minion-1:~$ sudo python3 MCP2221_i2cdetect.py
--------------------------------------------------
MCP2221(A) i2cdetect
--------------------------------------------------
Reset
Traceback (most recent call last):
File "MCP2221_i2cdetect.py", line 13, in <module>
mcp2221 = PyMCP2221A.PyMCP2221A()
File "/usr/local/lib/python3.8/dist-packages/PyMCP2221A/PyMCP2221A.py", line 16, in __init__
self.mcp2221a.open_path(hid.enumerate(VID, PID)[devnum]["path"])
IndexError: list index out of range
It gets stuck after printing Reset for quite a long time, and then crashes.
I can see the device (Adafruit board) is connected via USB.
sherlock@dt-minion-1:~$ sudo lsusb
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 007: ID 8087:0aaa Intel Corp. Bluetooth 9460/9560 Jefferson Peak (JfP)
Bus 001 Device 006: ID 2c7c:0800 Quectel Wireless Solutions Co., Ltd. RM500Q-GL
Bus 001 Device 005: ID 2886:8027 Seeed Technology Co., Ltd. Seeeduino_Cortex_M0+
Bus 001 Device 020: ID 04d8:00dd Microchip Technology, Inc. MCP2221 USB-I2C/UART Combo
Bus 001 Device 004: ID 0bda:0129 Realtek Semiconductor Corp. RTS5129 Card Reader Controller
Bus 001 Device 003: ID 0bda:c811 Realtek Semiconductor Corp. 802.11ac NIC
Bus 001 Device 002: ID 16d0:063d MCS Master Brick
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Is that an incompatibility with Python 3?
I found that post that might be related to the issue https://github.com/trezor/cython-hidapi/issues/86
My output from enumerate is the following:
>>> hid.enumerate(0x04d8,0x00dd)
[{'path': b'0001:0016:02', 'vendor_id': 1240, 'product_id': 221, 'serial_number': '', 'release_number': 256, 'manufacturer_string': 'Microchip Technology Inc.', 'product_string': 'MCP2221 USB-I2C/UART Combo', 'usage_page': 0, 'usage': 0, 'interface_number': 2}]
>>> d = hid.device()
>>> d.open_path( b'0001:0016:02')
>>>
So I actually can open a path to the device,
and if I hard-code the device path into the PyMCP2221A library itself AND comment out mcp2221.Reset(),
it will run! But mcp2221.Reset() changes the USB path every time, so that is not an option.
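One way to avoid both the IndexError and the hard-coded path is to re-enumerate every time and fail with a clear message when nothing matches — a hedged sketch (`find_device_path` is a hypothetical helper, not part of PyMCP2221A):

```python
def find_device_path(enumerate_fn, vid=0x04D8, pid=0x00DD, devnum=0):
    """Return the HID path of the devnum-th matching device.

    enumerate_fn is hid.enumerate in real use; passing it in keeps the
    helper testable without hardware attached.
    """
    devices = enumerate_fn(vid, pid)
    if len(devices) <= devnum:
        raise RuntimeError(
            f"MCP2221 not found: {len(devices)} device(s) matched "
            f"VID={vid:#06x} PID={pid:#06x}"
        )
    return devices[devnum]["path"]
```

In the library this would be used as `self.mcp2221a.open_path(find_device_path(hid.enumerate))` instead of indexing the enumeration list directly.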
sherlock@dt-minion-1:~$ sudo python3 mcptest.py
--------------------------------------------------
MCP2221(A) i2cdetect
--------------------------------------------------
Manufacturer: Microchip Technology Inc.
Product: MCP2221 USB-I2C/UART Combo
Serial No: Љ
0 1 2 3 4 5 6 7 8 9 A B C D E F
-- -- -- -- -- -- -- -- -- -- -- 0B -- -- -- -- 0F
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 1F
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 2F
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 3F
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 4F
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 5F
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 6F
-- -- -- -- -- -- -- -- -- -- -- -- -- -- --
What could be causing that?
TIA
Thank you very much.
I haven't had a similar issue, but mcp2221.Reset may have a problem.
mcp2221.Reset is a "hack" that releases the MCP2221A by resetting it when the connection to I2C fails and it gets stuck at Read.
It's not working properly as a HID driver, so you might want to comment it out.
It will be removed in future improvements.
Also, please tell me the Python version in detail.
I have certainly not tried the new version.
I didn't notice that Python had reached 3.9.
It's changing faster than it was in the Python2 era.
I slept on the problem, and now it is obvious that the problem is the Reset() function on my setup. It just crashes the board.
I use Ubuntu 20.10
Python 3.8.6 (default, Jan 27 2021, 15:42:20)
And that little board from Adafruit MCP2221A Breakout connected over USB-C
https://www.adafruit.com/product/4471
Without Reset() it works great and I have not seen a crash yet.
| gharchive/issue | 2021-04-21T21:15:43 | 2025-04-01T06:45:10.129844 | {
"authors": [
"nonNoise",
"xsherlockpl"
],
"repo": "nonNoise/PyMCP2221A",
"url": "https://github.com/nonNoise/PyMCP2221A/issues/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
852568358 | Error when accessing a file from cached namespace bucket
Environment info
NooBaa Version: 5.6.0
Platform: Kubernetes 1.18
Actual behavior
Create cached namespace bucket
Able to list the files of the remote bucket
Not able to copy a file
Expected behavior
Create cached namespace bucket
Able to list the files of the remote bucket
Able to copy a file
More information - Screenshots / Logs / Other output
The main error reported on the endpoint pod is:
Apr-7 8:26:42.267 [Endpoint/14] [ERROR] core.rpc.rpc:: RPC._request: response ERROR srv object_api.update_endpoint_stats params { namespace_stats: [ { io_stats: { read_count: 607, write_count: 0, read_bytes: 554063233, write_bytes: 0, error_write_bytes: 0, error_write_count: 0, error_read_bytes: 0, error_read_count: 0 }, namespace_resource_id: '60658c3da2a804002ee6026b' }, [length]: 1 ], bucket_counters: [ { bucket_name: SENSITIVE-64a717a11f8d324e, content_type: 'binary/octet-stream', read_count: 700, write_count: 0 }, { bucket_name: SENSITIVE-64a717a11f8d324e, content_type: 'application/octet-stream', read_count: 4, write_count: 0 }, [length]: 2 ] } reqid 34499@fcall://fcall(7psnzeet) took [1.1+1.1=2.2] Error: not anonymous method update_endpoint_stats
Attaching more verbose logging as well
noobaa.log
Hey @YiannisGkoufas I think that these errors on update_endpoint_stats should not interrupt the S3 service.
Do you see that the read fails? Because I see in your log that it might have completed?
Read request log
Apr-7 8:25:42.451 [Endpoint/14] [L0] core.endpoint.s3.s3_rest:: S3 REQUEST
GET /bucket1-cached/TPCDS-TEST-100G/call_center/part-00000-1f4a5e91-eac5-4190-8499-65d95212f848-c000.snappy.parquet
op get_object request_id kn76sf1t-8tcokv-gsr
{
host: 's3.noobaa.svc',
'accept-encoding': 'identity',
'user-agent': 'aws-cli/2.1.35 Python/3.8.8 Linux/4.15.0-140-generic exe/x86_64.debian.10 prompt/off command/s3.cp',
'x-amz-date': '20210407T082542Z',
'x-amz-content-sha256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855',
authorization: 'AWS4-HMAC-SHA256 Credential=fOXTchSSFBmsUIHyLnY9/20210407/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=036b7375ceaef24cc0df8a646a456c4bb398398a16330e0623081582d260b3a6'
}
Apr-7 8:25:42.452 [Endpoint/14] [L0] core.server.object_services.object_server:: object_server.read_object_md:
{
bucket: SENSITIVE-64a717a11f8d324e,
key: 'TPCDS-TEST-100G/call_center/part-00000-1f4a5e91-eac5-4190-8499-65d95212f848-c000.snappy.parquet'
}
Apr-7 8:25:42.455 [Endpoint/14] [L0] core.sdk.namespace_cache:: NamespaceCache.read_object_md use md from cache
{
obj_id: '606c85f72c09c0000e78e8fd',
bucket: SENSITIVE-64a717a11f8d324e,
key: 'TPCDS-TEST-100G/call_center/part-00000-1f4a5e91-eac5-4190-8499-65d95212f848-c000.snappy.parquet',
size: 13964,
etag: 'a2a7f60ad6165b1f9d621094c8ddd069',
md5_b64: 'oqf2CtYWWx+dYhCUyN3QaQ==',
content_type: 'binary/octet-stream',
create_time: 1617724919898,
last_modified_time: 1616334864000,
cache_last_valid_time: 1617783942436,
num_parts: 1,
is_latest: true,
xattr: { 'noobaa-namespace-s3-bucket': 'yiannis-tpcds-100' },
stats: { reads: 0, last_read: 0 },
tag_count: 0,
should_read_from_cache: true
}
Apr-7 8:25:42.456 [Endpoint/14] [L0] core.sdk.namespace_cache:: NamespaceCache._read_object_stream
{
params: {
object_md: {
obj_id: '606c85f72c09c0000e78e8fd',
bucket: SENSITIVE-64a717a11f8d324e,
key: 'TPCDS-TEST-100G/call_center/part-00000-1f4a5e91-eac5-4190-8499-65d95212f848-c000.snappy.parquet',
size: 13964,
etag: 'a2a7f60ad6165b1f9d621094c8ddd069',
md5_b64: 'oqf2CtYWWx+dYhCUyN3QaQ==',
content_type: 'binary/octet-stream',
create_time: 1617724919898,
last_modified_time: 1616334864000,
cache_last_valid_time: 1617783942436,
num_parts: 1,
is_latest: true,
xattr: { 'noobaa-namespace-s3-bucket': 'yiannis-tpcds-100' },
stats: { reads: 0, last_read: 0 },
tag_count: 0,
should_read_from_cache: true
},
obj_id: '606c85f72c09c0000e78e8fd',
bucket: 'bucket1-cached',
key: 'TPCDS-TEST-100G/call_center/part-00000-1f4a5e91-eac5-4190-8499-65d95212f848-c000.snappy.parquet',
content_type: 'binary/octet-stream',
noobaa_trigger_agent: false,
md_conditions: undefined,
encryption: undefined,
read_size: 13964
}
}
Apr-7 8:25:42.456 [Endpoint/14] [L0] core.sdk.namespace_cache:: NamespaceCache: bucket usage
{
bucket_free_space_bytes: 3856416768,
bucket_usage: {
size: 15201595,
size_reduced: 516854505,
free: 4820520960,
available_for_upload: 4820520960,
last_update: 1617783828573
}
}
Apr-7 8:25:42.457 [Endpoint/14] [L0] core.util.http_utils:: HTTP REPLY RAW
GET /bucket1-cached/TPCDS-TEST-100G/call_center/part-00000-1f4a5e91-eac5-4190-8499-65d95212f848-c000.snappy.parquet
Hi @guymguym, yeah, the file shows up but with 0 size.
I noticed this error on the endpoint pod:
Apr-21 11:15:24.492 [Endpoint/11] [L0] core.endpoint.s3.ops.s3_get_object:: request aborted: undefined
(node:11) UnhandledPromiseRejectionWarning: Error: Semaphore Timeout
at Semaphore._on_timeout (/root/node_modules/noobaa-core/src/util/semaphore.js:215:25)
at Timeout.<anonymous> (/root/node_modules/noobaa-core/src/util/semaphore.js:211:53)
at listOnTimeout (internal/timers.js:554:17)
at processTimers (internal/timers.js:497:7)
(node:11) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 102)
(node:11) UnhandledPromiseRejectionWarning: Error: Semaphore Timeout
at Semaphore._on_timeout (/root/node_modules/noobaa-core/src/util/semaphore.js:215:25)
at Timeout.<anonymous> (/root/node_modules/noobaa-core/src/util/semaphore.js:211:53)
at listOnTimeout (internal/timers.js:554:17)
at processTimers (internal/timers.js:497:7)
(node:11) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_
@jeniawhite @dannyzaken @nimrod-becker Did you see issues with caching on 5.6?
@YiannisGkoufas What is your hub resource? AWS S3? S3-compatible? Also note that we recently released 5.7; perhaps you would prefer to use it on your next runs. Another option is to use weekly master builds - @liranmauda do we have a wiki on weekly master builds so that it will be clear how to find the CLI, the images, etc.? I see that @jeniawhite uploaded the last CLI to https://noobaa-operator-cli.s3.amazonaws.com/ but I wonder if the weekly build process pushed there too.
I believe we were not able to bring all the fixes of caching into 5.6. @YiannisGkoufas, would it be possible for you to use the 5.7 version? We have the images in the Docker Hub repos.
@guymguym @nimrod-becker It's IBM COS, I tried version 5.7 and builds from master with the same behaviour
@liranmauda do we have a wiki on weekly master builds so that it will be clear how to find the CLI, and the images etc...
We Don't have a Wiki for that, AFAIK.
I see that @jeniawhite uploaded the last CLI to https://noobaa-operator-cli.s3.amazonaws.com/ but I wonder if the weekly build process pushed there too.
The weekly build process should push noobaa-operator-cli unless something is not working.
Created wiki for that - https://github.com/noobaa/noobaa-core/wiki/Weekly-Master-Builds
@YiannisGkoufas Can you share the CR's - BackingStore, NamespaceStore, BucketClass?
Sure @guymguym, using the latest images+cli
https://github.com/noobaa/noobaa-operator/commit/bf95bd6e00a8a65e191915bc4b852761b005ad71
https://github.com/noobaa/noobaa-core/commit/4ce639a13908fa898bb31be8d44aac3b97ebd512
I am not able to do even an ls
[yiannis@oc3656886087 ~]$ aws s3 --endpoint-url=https://150.238.203.59:443 --no-verify-ssl ls
/usr/local/aws-cli/v2/2.1.3/dist/urllib3/connectionpool.py:1020: InsecureRequestWarning: Unverified HTTPS request is being made to host '150.238.203.59'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
expected string or bytes-like object
From the CRDs you suggested, I only had a BackingStore and a BucketClass (I added the connection and the namespace bucket via the UI).
noobaa_crds.zip
| gharchive/issue | 2021-04-07T16:11:22 | 2025-04-01T06:45:10.146700 | {
"authors": [
"YiannisGkoufas",
"dannyzaken",
"guymguym",
"liranmauda",
"nimrod-becker"
],
"repo": "noobaa/noobaa-core",
"url": "https://github.com/noobaa/noobaa-core/issues/6441",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
933510958 | Endpoint panic with IO and scaling up operation
Environment info
NooBaa Version: master-20210629
Platform: OCP 4.6.16
Actual behavior
Endpoint pod restarted on IO + scaling up operation
Expected behavior
Endpoint pod should not restart with panic
Steps to reproduce
Started IO for multi-size files.
Patched noobaa endpoint to 36 pods
Endpoints panicked after a few minutes, for all endpoint pods. Many endpoints got stuck with the error CreateContainerError.
Endpoint panic:
2021-06-30 09:26:50.275486 [PID-10/TID-10] [L1] FS::FSWorker::Begin: Stat _path=/nsfs/nsfs-nsr-1/bucket-11
PANIC: FS::FileWrap::dtor: file not closed _path=/nsfs/nsfs-nsr-1/bucket-11/.noobaa-nsfs_60dc35d00dab1800234855cf/multipart-uploads/26f8a0e4-4d30-43fe-afac-c7d6fbfc7d19/part-2949 _fd=22 Success (0) ~FileWrap() at ../src/native/fs/fs_napi.cpp:625
######################################################################
/noobaa_init_files/noobaa_init.sh: line 74: 10 Aborted $*
Wed Jun 30 09:26:50 UTC 2021 NooBaa: Process exited RIP (RC=134)
######################################################################
oc describe pod o/p:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 143m default-scheduler Successfully assigned noobaa/noobaa-endpoint-7cb9d79574-ffsrs to worker2.ocp-akshat-1.cp.fyre.ibm.com
Normal AddedInterface 143m multus Add eth0 [10.254.17.115/22]
Normal Created 143m kubelet Created container endpoint
Normal Started 143m kubelet Started container endpoint
Warning Failed 20m kubelet Error: relabel failed /var/mnt/fs1/pvc-93b1a7ad-48a1-4e08-9a83-f20fbafd2768/pvc-93b1a7ad-48a1-4e08-9a83-f20fbafd2768-data: lstat /var/mnt/fs1/pvc-93b1a7ad-48a1-4e08-9a83-f20fbafd2768/pvc-93b1a7ad-48a1-4e08-9a83-f20fbafd2768-data/bucket-11/.noobaa-nsfs_60dc35d00dab1800234855cf/multipart-uploads/26f8a0e4-4d30-43fe-afac-c7d6fbfc7d19/part-1138: no such file or directory
Warning Failed 20m kubelet Error: relabel failed /var/mnt/fs1/pvc-93b1a7ad-48a1-4e08-9a83-f20fbafd2768/pvc-93b1a7ad-48a1-4e08-9a83-f20fbafd2768-data: lstat /var/mnt/fs1/pvc-93b1a7ad-48a1-4e08-9a83-f20fbafd2768/pvc-93b1a7ad-48a1-4e08-9a83-f20fbafd2768-data/bucket-11/.noobaa-nsfs_60dc35d00dab1800234855cf/multipart-uploads/26f8a0e4-4d30-43fe-afac-c7d6fbfc7d19/part-1077: no such file or directory
Warning Failed 19m kubelet Error: relabel failed /var/mnt/fs1/pvc-93b1a7ad-48a1-4e08-9a83-f20fbafd2768/pvc-93b1a7ad-48a1-4e08-9a83-f20fbafd2768-data: lstat /var/mnt/fs1/pvc-93b1a7ad-48a1-4e08-9a83-f20fbafd2768/pvc-93b1a7ad-48a1-4e08-9a83-f20fbafd2768-data/bucket-11/.noobaa-nsfs_60dc35d00dab1800234855cf/uploads/1c6e8527-1b7f-4ae5-8ae4-ff6aa6dc19a5: no such file or directory
Warning Failed 19m kubelet Error: relabel failed /var/mnt/fs1/pvc-93b1a7ad-48a1-4e08-9a83-f20fbafd2768/pvc-93b1a7ad-48a1-4e08-9a83-f20fbafd2768-data: lstat /var/mnt/fs1/pvc-93b1a7ad-48a1-4e08-9a83-f20fbafd2768/pvc-93b1a7ad-48a1-4e08-9a83-f20fbafd2768-data/bucket-11/.noobaa-nsfs_60dc35d00dab1800234855cf/uploads/16dbe1a0-b3e7-452d-b3e0-4d4afabc0a4f: no such file or directory
Warning Failed 18m kubelet Error: relabel failed /var/mnt/fs1/pvc-93b1a7ad-48a1-4e08-9a83-f20fbafd2768/pvc-93b1a7ad-48a1-4e08-9a83-f20fbafd2768-data: lstat /var/mnt/fs1/pvc-93b1a7ad-48a1-4e08-9a83-f20fbafd2768/pvc-93b1a7ad-48a1-4e08-9a83-f20fbafd2768-data/bucket-11/.noobaa-nsfs_60dc35d00dab1800234855cf/uploads/288c8f19-9b82-4af8-9cc1-4b8c7e059ed4: no such file or directory
Warning Failed 17m kubelet Error: relabel failed /var/mnt/fs1/pvc-93b1a7ad-48a1-4e08-9a83-f20fbafd2768/pvc-93b1a7ad-48a1-4e08-9a83-f20fbafd2768-data: lstat /var/mnt/fs1/pvc-93b1a7ad-48a1-4e08-9a83-f20fbafd2768/pvc-93b1a7ad-48a1-4e08-9a83-f20fbafd2768-data/bucket-11/.noobaa-nsfs_60dc35d00dab1800234855cf/uploads/247fa113-269e-4784-80b1-3534502ce54d: no such file or directory
Warning BackOff 15m (x13 over 19m) kubelet Back-off restarting failed container
Normal Pulled 15m (x8 over 143m) kubelet Container image "noobaa/noobaa-core:master-20210629" already present on machine
pod o/p:
[root@ocp-akshat-1-inf test]# podn
NAME READY STATUS RESTARTS AGE
noobaa-core-0 1/1 Running 0 174m
noobaa-db-pg-0 1/1 Running 0 3h52m
noobaa-default-backing-store-noobaa-pod-a55b7b4b 0/1 Terminating 0 7s
noobaa-endpoint-7cb9d79574-28cxn 0/1 CreateContainerError 0 18m
noobaa-endpoint-7cb9d79574-2ddpp 0/1 CreateContainerError 0 18m
noobaa-endpoint-7cb9d79574-2mdkj 0/1 CreateContainerError 0 18m
noobaa-endpoint-7cb9d79574-4f4dp 0/1 CreateContainerError 0 18m
noobaa-endpoint-7cb9d79574-4j2kw 0/1 CreateContainerError 0 18m
noobaa-endpoint-7cb9d79574-6gw4g 0/1 CreateContainerError 0 18m
noobaa-endpoint-7cb9d79574-6qrqk 1/1 Running 1 18m
noobaa-endpoint-7cb9d79574-6tpvn 1/1 Running 1 18m
noobaa-endpoint-7cb9d79574-6xfd6 0/1 CreateContainerError 0 18m
noobaa-endpoint-7cb9d79574-7r2xk 0/1 CreateContainerError 0 18m
noobaa-endpoint-7cb9d79574-85sj5 0/1 CreateContainerError 0 18m
noobaa-endpoint-7cb9d79574-bcgdh 0/1 CreateContainerError 0 18m
noobaa-endpoint-7cb9d79574-dfl8d 0/1 CreateContainerError 0 18m
noobaa-endpoint-7cb9d79574-ffsrs 0/1 CreateContainerError 0 130m
noobaa-endpoint-7cb9d79574-gmcld 1/1 Running 1 18m
noobaa-endpoint-7cb9d79574-hjpbz 0/1 CreateContainerError 0 18m
noobaa-endpoint-7cb9d79574-jbzw6 1/1 Running 1 18m
noobaa-endpoint-7cb9d79574-jswfl 0/1 CreateContainerError 0 18m
noobaa-endpoint-7cb9d79574-lm5bs 0/1 CreateContainerError 0 18m
noobaa-endpoint-7cb9d79574-lr5rg 1/1 Running 1 18m
noobaa-endpoint-7cb9d79574-n8kvk 0/1 CreateContainerError 0 18m
noobaa-endpoint-7cb9d79574-n9ctk 0/1 CreateContainerError 0 18m
noobaa-endpoint-7cb9d79574-ngcwf 1/1 Running 1 18m
noobaa-endpoint-7cb9d79574-nrvjw 1/1 Running 1 18m
noobaa-endpoint-7cb9d79574-pkclt 0/1 CreateContainerError 0 18m
noobaa-endpoint-7cb9d79574-r9xsm 0/1 CreateContainerError 0 18m
noobaa-endpoint-7cb9d79574-s4qhx 0/1 CreateContainerError 0 18m
noobaa-endpoint-7cb9d79574-s7rzp 1/1 Running 1 18m
noobaa-endpoint-7cb9d79574-sdt42 0/1 CreateContainerError 0 18m
noobaa-endpoint-7cb9d79574-svpkq 0/1 CreateContainerError 0 18m
noobaa-endpoint-7cb9d79574-v7br6 0/1 CreateContainerError 0 18m
noobaa-endpoint-7cb9d79574-x4xqf 0/1 CreateContainerError 0 18m
noobaa-endpoint-7cb9d79574-x8pqv 1/1 Running 1 18m
noobaa-endpoint-7cb9d79574-xgr7n 0/1 CreateContainerError 0 18m
noobaa-endpoint-7cb9d79574-xjvsm 0/1 CreateContainerError 0 18m
noobaa-endpoint-7cb9d79574-xld77 0/1 CreateContainerError 0 18m
noobaa-operator-74dc58b6ff-9dp65 1/1 Running 0 3h52m
More information - Screenshots / Logs / Other output
Capturing all logs for previous pods as well
@akmithal as we talked, duping into #6624
| gharchive/issue | 2021-06-30T09:52:33 | 2025-04-01T06:45:10.153367 | {
"authors": [
"akmithal",
"nimrod-becker"
],
"repo": "noobaa/noobaa-core",
"url": "https://github.com/noobaa/noobaa-core/issues/6626",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
237638466 | Horizontal scroll with mouse wheel with shift key
Native scrolling (at least on Mac; I don't know about Windows, etc.) allows you to hold the shift key with the mouse wheel to scroll horizontally.
PS: this is similar to, but not the same as, https://github.com/noraesae/perfect-scrollbar/issues/631
Seems to be a Mac-only issue. Works on Windows.
Working on my demos on Windows
It works in my mac on demo. Can you specify the env?
It seems like working well in the next branch too. Will close this.
| gharchive/issue | 2017-06-21T19:22:43 | 2025-04-01T06:45:10.185144 | {
"authors": [
"CKGrafico",
"MA-Maddin",
"noraesae",
"renatorib",
"utatti"
],
"repo": "noraesae/perfect-scrollbar",
"url": "https://github.com/noraesae/perfect-scrollbar/issues/661",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
delete some things in README.md
Removed some of the description.
ok
| gharchive/pull-request | 2022-04-21T07:26:46 | 2025-04-01T06:45:10.232442 | {
"authors": [
"normalbe"
],
"repo": "normalbe/x-toolset",
"url": "https://github.com/normalbe/x-toolset/pull/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2287292273 | Map not displayed when switching from background to foreground
When switching from the background to the foreground, only the map is not displayed.
Map capture and similar features work normally.
I tried to force a refresh using the forceRefresh() function, but it didn't work.
Is there a way to force a refresh when switching to the foreground?
Because of this issue, I haven't been able to release the app.
When will this be fixed?
#201 Please check whether this is the same issue as that one.
| gharchive/issue | 2024-05-09T09:33:49 | 2025-04-01T06:45:10.262643 | {
"authors": [
"88hwany",
"note11g"
],
"repo": "note11g/flutter_naver_map",
"url": "https://github.com/note11g/flutter_naver_map/issues/230",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1985958963 | insert date header into file
Insert a header like "# 2023-11-09" at the top of the file, which improves things when rendering in the terminal.
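A minimal, language-agnostic sketch of the requested behavior (the function name and handling are illustrative assumptions, not daylog's actual code):

```python
from datetime import date

def prepend_date_header(text, day=None):
    # Prepend a markdown-style date heading, e.g. "# 2023-11-09",
    # unless the text already starts with that exact heading.
    day = day or date.today()
    header = f"# {day.isoformat()}"
    if text.startswith(header):
        return text
    return f"{header}\n\n{text}"
```

Calling it twice for the same day is a no-op, so it is safe to run on every edit.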
https://github.com/notnmeyer/daylog-cli/pull/11
| gharchive/issue | 2023-11-09T16:08:58 | 2025-04-01T06:45:10.318394 | {
"authors": [
"notnmeyer"
],
"repo": "notnmeyer/daylog-cli",
"url": "https://github.com/notnmeyer/daylog-cli/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1364673652 | [Feature Request] Add toggle to randomize the seed after generating
Doing multiple generations with the same prompt is problematic when using the same seed as it tends to "deep fry" the image. It would be significantly smoother if koi had a system for randomizing seeds.
I think the expected behavior for this toggle should be:
When you click "dream," if the toggle is on,
Read the current seed value and use that for generation
Replace the current seed value with a random seed
If the toggle is on, and you click it on,
Replace the current seed value with a random seed
If the toggle is on, and you edit the seed,
Turn the toggle off
This allows for manual intervention with seeds when desired, and maintains clarity as to which seed is being used while randomness is on.
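The expected behavior above can be sketched as simple state logic (names and the seed range are illustrative, not koi's Krita plugin API):

```python
import random

class SeedControl:
    """Sketch of the proposed randomize-seed toggle behavior."""

    def __init__(self, seed=0):
        self.seed = seed          # value shown in the seed field
        self.randomize = False    # state of the toggle

    def toggle(self, on):
        self.randomize = on
        if on:
            # Turning the toggle on replaces the current seed.
            self.seed = random.randint(0, 2**32 - 1)

    def edit_seed(self, value):
        # A manual edit takes priority: turn randomness off.
        self.seed = value
        self.randomize = False

    def dream(self):
        used = self.seed  # generate with the currently shown seed
        if self.randomize:
            # ...then roll a fresh seed for the next generation.
            self.seed = random.randint(0, 2**32 - 1)
        return used
```

This keeps the seed field authoritative: whatever is displayed is what the next generation uses.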
If the toggle is on, and you click it on
You mean that when the toggle is on and the user clicks on the text field for the seed input, it generates a new one ?
This whole behavior might actually not be that difficult to make, I might try to work on this issue ! Gotta get a grip of how Krita handles UI, but with what @nousr already has made, it shouldn't be that hard to implement ;)
@tocram1 awesome! I'll assign this one to you--let me know if you get stuck or want me to take you off of it 👍
Oops! That was a typo, I meant to write:
If the toggle is off, and you click it on,
Replace the current seed value with a random seed
| gharchive/issue | 2022-09-07T13:33:06 | 2025-04-01T06:45:10.334128 | {
"authors": [
"aaronsantiago",
"nousr",
"tocram1"
],
"repo": "nousr/koi",
"url": "https://github.com/nousr/koi/issues/39",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
123632279 | 'file exists' template filter
This template filter would allow for testing that a 'pimp' file exists.
@toopy: Do you mind have a quick look at this PR please? :wink: I fixed unittests. If it looks good to you, we can merge this one as well. Thanks.
| gharchive/pull-request | 2015-12-23T10:03:51 | 2025-04-01T06:45:10.336074 | {
"authors": [
"amessinger",
"wo0dyn"
],
"repo": "novafloss/django-pimpmytheme",
"url": "https://github.com/novafloss/django-pimpmytheme/pull/20",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1755532792 | Missing fonts
Hello,
after the EspoCRM update to 7.5.1
I get an error message that 3 fonts are missing.
OK, I was able to fix the above error.
I had put the modern.css back on because I had made adjustments there.
But parts of it were changed and therefore the prompt font was not found.
But now the following error messages are displayed:
Hi, seems like the resource is found but blocked by CORS policy. I don't have these issues in my production enviroment running 7.5.0 but I will check this week :)
Are you running Apache, nginx or any other proxy?
My server was set to Apache. I switched to nginx as a test and see no difference.
As far as I can see, this problem no longer occurs in 7.5.5.
It didn't really bother me anyway; I actually came across it during some other troubleshooting.
| gharchive/issue | 2023-06-13T19:32:12 | 2025-04-01T06:45:10.344792 | {
"authors": [
"ChrisSka",
"novastream"
],
"repo": "novastream/Modern-Theme",
"url": "https://github.com/novastream/Modern-Theme/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1393803915 | [Bug] Cannot send message to your squad through mobile !!
The "send" button is not visible for mobile users, nor can you send a message with the "enter" key.
Please add some margin at the bottom.
Is the lowercase intentional?
Do you want to fix it? :)
Yes!!
Go for it :)
In order to fix this i need to sign in to the site. But how do i to that here?
@sumitbishti , May I work on this issue?
Still can't send messages to teammates on mobile, plus there are problems with hovering
| gharchive/issue | 2022-10-02T15:29:47 | 2025-04-01T06:45:10.356930 | {
"authors": [
"Anushka123-star",
"diwash007",
"nevo-david",
"sumitbishti"
],
"repo": "novuhq/hacksquad-website",
"url": "https://github.com/novuhq/hacksquad-website/issues/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1258252117 | Make Audit Bucket Glacier Lifecycle Configuration Optional
Is your feature request related to a problem? Please describe.
Glacier requests can add up to a significant expense, making this lifecycle policy not worth the cost for environments with more modest security requirements.
Describe the solution you'd like
Allow the glacier lifecycle configuration to be optional, either through a new parameter or by setting the transition days to 0 (even though this is a valid lifecycle configuration option).
Describe alternatives you've considered
Setting up an external audit bucket with the appropriate lifecycle configuration, though this is a lot of overkill and boilerplate code to address such a small part of the configuration.
Thank you for your suggestion!
I've checked around the official AWS document and found that transitioning small objects is not recommended. #293 will make these rules optional as well as disabling them by default.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-considerations.html
Released in v2.0.0
Thanks for the quick turnaround. I was also unaware of this, until I saw our AWS bill. Always learning.
| gharchive/issue | 2022-06-02T14:18:25 | 2025-04-01T06:45:10.392600 | {
"authors": [
"derylseale",
"nozaq"
],
"repo": "nozaq/terraform-aws-secure-baseline",
"url": "https://github.com/nozaq/terraform-aws-secure-baseline/issues/291",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
523693534 | Fix/tokens
Description
Currently, if GitLab returns a token starting with -, AWS SSM fails to push it and the token does not work. This PR fixes that.
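For illustration (an assumed root cause and workaround, not necessarily this PR's exact change): most CLIs parse a leading `-` as a flag, so a token value is safer attached with `=` than passed as a separate argument. A sketch of building such a command:

```python
def build_put_parameter_cmd(name, value):
    # A token such as "-abc123" would be parsed as an option if passed
    # as a separate argument after --value; the --option=value form
    # keeps it bound to the option.
    return [
        "aws", "ssm", "put-parameter",
        f"--name={name}",
        "--type=SecureString",
        f"--value={value}",
        "--overwrite",
    ]

cmd = build_put_parameter_cmd("/runner/token", "-abc123")
```

In real use the list would be handed to `subprocess.run(cmd)` rather than returned.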
Migrations required
NO
@rin-amezquita thanks will check this week
| gharchive/pull-request | 2019-11-15T21:19:41 | 2025-04-01T06:45:10.399300 | {
"authors": [
"npalm",
"rin-amezquita"
],
"repo": "npalm/terraform-aws-gitlab-runner",
"url": "https://github.com/npalm/terraform-aws-gitlab-runner/pull/159",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
421961520 | Used invariant culture to generate interval expressions
Fixes #847
@roji You mean a test? If so, could you provide an example of it and where it should be placed?
@YohDeadfall no not a test, just confirmation that it actually solves the issue :)
| gharchive/pull-request | 2019-03-17T19:16:08 | 2025-04-01T06:45:10.407391 | {
"authors": [
"YohDeadfall",
"roji"
],
"repo": "npgsql/Npgsql.EntityFrameworkCore.PostgreSQL",
"url": "https://github.com/npgsql/Npgsql.EntityFrameworkCore.PostgreSQL/pull/852",
"license": "PostgreSQL",
"license_type": "permissive",
"license_source": "github-api"
} |
329163126 | Update Python requirements
@alykat: I tested this new set of requirements against a few of the core dailygraphics commands, and didn't experience any issues. It's worth testing yourself and kicking it around on a new graphic. I'd probably recommend creating a new Python virtual environment to test this, so that if it's finicky you can undo the changes.
For all libraries I upgraded to the newest bugfix version (assuming semver)
I upgraded Fabric to the newest minor version to remove a dependency on pycrypto, which is insecure and is also no longer supported (since 2016)
I upgraded requests to the newest major version to remove a security bug
A few of the libraries in this file were dependencies-of-dependencies, and I have removed them; this is technically proper practice for requirements.txt if the repo also utilizes a setup.py, but in our case there's more of a risk of versions becoming improperly pinned (eg)
Once this PR is merged in, then I'll enable Snyk "Fail if the repo has any vulnerabilities (high-severity)" for this repo, thereafter warning on PRs if new high-severity issues exist (or develop!) in Python dependencies.
Got a bunch of errors like this during the render process:
WARNING: Couldn't write lextab module <module 'slimit.lextab' from '/Users/AHurt/.virtualenvs/dg-python-reqs/lib/python2.7/site-packages/slimit/lextab.pyc'>. Won't overwrite existing lextab module
WARNING: yacc table file version is out of date
WARNING: Token 'IMPORT' defined, but not used
WARNING: Token 'BLOCK_COMMENT' defined, but not used
WARNING: Token 'ENUM' defined, but not used
WARNING: Token 'EXTENDS' defined, but not used
WARNING: Token 'LINE_COMMENT' defined, but not used
WARNING: Token 'LINE_TERMINATOR' defined, but not used
WARNING: Token 'CONST' defined, but not used
WARNING: Token 'EXPORT' defined, but not used
WARNING: Token 'CLASS' defined, but not used
WARNING: Token 'SUPER' defined, but not used
WARNING: There are 10 unused tokens
I ran into this during GDPR, too. I was able to fix the error by bumping slimit and specifying a different version of ply (a slimit dependency).
slimit==0.8.1
ply==3.4
| gharchive/pull-request | 2018-06-04T18:17:16 | 2025-04-01T06:45:10.576406 | {
"authors": [
"alykat",
"mileswwatkins"
],
"repo": "nprapps/dailygraphics",
"url": "https://github.com/nprapps/dailygraphics/pull/274",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1670196421 | Column concatenation in .pdb file when residue index > 999
I was running RFdiffusion with the dl_binder_design pipeline when I suddenly started getting errors about non-unique residue indices. It seemed strange at first, as the output file from RFdiffusion was intact (chain A - binder, chain B - target, each residue index unique). However, this is what happens when the residue index > 999:
ATOM 15333 2HE MET B 999 29.180 -52.646 -15.485 1.00 0.00 H
ATOM 15334 3HE MET B 999 27.618 -52.456 -14.654 1.00 0.00 H
ATOM 15335 N GLY B1000 26.291 -58.439 -16.471 1.00 0.00 N
ATOM 15336 CA GLY B1000 26.746 -59.823 -16.438 1.00 0.00 C
At residue index >= 1000, columns 5 (chain) and 6 (residue index) are concatenated, so the input_check function inside interfaceAF2predict.py ends up checking the wrong column.
I guess this is linked to the .pdb format itself. I implemented a small fix to work around it.
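The fix boils down to reading the record by fixed column positions instead of splitting on whitespace (a sketch using the standard PDB column layout, not the pipeline's exact code):

```python
def parse_chain_and_resseq(line):
    # PDB ATOM records are fixed-width: chain ID is column 22 and the
    # residue sequence number is columns 23-26 (1-based). Splitting on
    # whitespace breaks once the two columns run together at resi >= 1000.
    chain_id = line[21]
    res_seq = int(line[22:26])
    return chain_id, res_seq

# A well-formed line where chain "B" and residue 1000 are adjacent:
line = "ATOM  15335  N   GLY B1000      26.291 -58.439 -16.471  1.00  0.00           N"
```

Slice-based parsing returns `("B", 1000)` for this line, where a whitespace split would see the single token `B1000`.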
Thanks for notifying me about this issue and opening a PR. I have a slightly different implementation of this fix in mind and I will push that today.
I've pushed a fix to this. Thanks again!
| gharchive/pull-request | 2023-04-16T23:43:28 | 2025-04-01T06:45:10.581757 | {
"authors": [
"alllirik",
"nrbennet"
],
"repo": "nrbennet/dl_binder_design",
"url": "https://github.com/nrbennet/dl_binder_design/pull/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1521188580 | Update sbt, scripted-plugin to 1.8.2
Updates
org.scala-sbt:sbt
org.scala-sbt:scripted-plugin
from 1.6.2 to 1.8.2.
GitHub Release Notes - Version Diff
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Adjust future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "org.scala-sbt" } ]
Or, add this to slow down future updates of this dependency:
dependencyOverrides = [{
pullRequests = { frequency = "@monthly" },
dependency = { groupId = "org.scala-sbt" }
}]
labels: library-update, early-semver-minor, semver-spec-minor, commit-count:1
Superseded by #292.
| gharchive/pull-request | 2023-01-05T18:24:52 | 2025-04-01T06:45:10.692888 | {
"authors": [
"scala-steward"
],
"repo": "nrinaudo/kantan.sbt",
"url": "https://github.com/nrinaudo/kantan.sbt/pull/278",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1428444611 | Revert "Added documentation about escaping values for $GITHUB_ENV file (#21582)"
Reverts nrz77/docs#1
c
| gharchive/pull-request | 2022-10-29T20:34:17 | 2025-04-01T06:45:10.707741 | {
"authors": [
"nrz77"
],
"repo": "nrz77/docs",
"url": "https://github.com/nrz77/docs/pull/2",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
560617808 | Fixes for following formats: vtk, medit, med
VTK
saving cell data was not correctly implemented; the tags CELL_DATA and FIELD FieldData should appear in a VTK file only once
the limit on the FieldData dimension has no justification
MEDIT
"medit:ref" for cells must be a list of arrays containing group ids
MED
the FAS object can be located in the mesh object, not only at the root
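The VTK point can be illustrated with a minimal cell-data writer (a hedged sketch of the legacy VTK ASCII layout, not meshio's actual code): CELL_DATA and FIELD FieldData open one section that every array shares.

```python
import io

def write_cell_data(f, cell_data):
    # Legacy VTK: one CELL_DATA header and one FIELD section for the
    # whole file; each array is listed inside that single section.
    n_cells = len(next(iter(cell_data.values())))
    f.write(f"CELL_DATA {n_cells}\n")
    f.write(f"FIELD FieldData {len(cell_data)}\n")
    for name, values in cell_data.items():
        # Array header: name, numComponents, numTuples, dataType
        f.write(f"{name} 1 {len(values)} float\n")
        f.write(" ".join(str(v) for v in values) + "\n")

buf = io.StringIO()
write_cell_data(buf, {"pressure": [1.0, 2.0], "density": [0.5, 0.7]})
out = buf.getvalue()
```

Emitting a fresh CELL_DATA/FIELD pair per array, as the buggy path did, produces a file many readers reject.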
The fix for vtk should solve #643.
Oh, I thought the bug #643 was in reading MSH 2.2 rather than the subsequent write to VTK.
Yes, I see, there is another issue. In the MSH input file in #643, the points are not numbered in a continuous range.
| gharchive/pull-request | 2020-02-05T20:43:09 | 2025-04-01T06:45:10.714816 | {
"authors": [
"gdmcbain",
"vlukes"
],
"repo": "nschloe/meshio",
"url": "https://github.com/nschloe/meshio/pull/665",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1948183576 | CDR from pm_icecon
Brings in the CLI and other logic specific to the CDR from pm_icecon. See the associated PR for that repo here: https://github.com/nsidc/pm_icecon/pull/27
Note that we expect tests to fail on this branch until https://github.com/nsidc/pm_icecon/pull/27 is merged.
| gharchive/pull-request | 2023-10-17T20:19:48 | 2025-04-01T06:45:10.760592 | {
"authors": [
"trey-stafford"
],
"repo": "nsidc/seaice_ecdr",
"url": "https://github.com/nsidc/seaice_ecdr/pull/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
865808901 | Template for ElementVideo missing
When adding n ElementVideo element to my page I get the following error:
[User Warning] None of the following templates could be found: Array ( [0] => NSWDPC\Elemental\Models\FeaturedVideo\ElementVideo_ElementalArea_Normal [1] => NSWDPC\Elemental\Models\FeaturedVideo\ElementVideo_Normal [2] => NSWDPC\Elemental\Models\FeaturedVideo\ElementVideo_ElementalArea [3] => NSWDPC\Elemental\Models\FeaturedVideo\ElementVideo [4] => NSWDPC\Elemental\Models\FeaturedVideo\ElementVideo_ElementalArea_Normal [5] => NSWDPC\Elemental\Models\FeaturedVideo\ElementVideo_Normal [6] => NSWDPC\Elemental\Models\FeaturedVideo\ElementVideo_ElementalArea [7] => NSWDPC\Elemental\Models\FeaturedVideo\ElementVideo ) in themes "Array ( [0] => s2hub [1] => $default ) "
I don't find the relevant template for that element in the module.
Again, I forgot to add this, thanks for picking it up.
This has been fixed, and I've bumped the version.
| gharchive/issue | 2021-04-23T06:57:39 | 2025-04-01T06:45:10.795356 | {
"authors": [
"tardinha",
"wernerkrauss"
],
"repo": "nswdpc/silverstripe-elemental-feature-video",
"url": "https://github.com/nswdpc/silverstripe-elemental-feature-video/issues/3",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |