1351447818 | High CPU load (Virtual Machine Worker Process + System) when session is IDLE
Version
Win: 10.0.22622.575 WSL: 0.66.2.0 Kernel: 5.15.57.1
WSL Version
[X] WSL 2
[ ] WSL 1
Kernel Version
5.15.57.1
Distro Version
Ubuntu 22.04
Other Software
No response
Repro Steps
Install WSL2/Ubuntu and start a session. Observe in Windows Task Manager a high load on both "Virtual Machine Worker Process" and "System", each taking 15% CPU for a combined 30+% CPU load. After wsl --shutdown the issue is gone.
Expected Behavior
No significant CPU usage when Linux/WSL is IDLE.
Actual Behavior
This is on my Surface Pro X (aarch64) - other ARM machine users, for instance with the new Thinkpad X13s, have reported similar observations.
Both "Virtual Machine Worker Process" and "System" each take roughly 15% CPU for a combined 30% CPU load.
Linux System is idle:
wsl --system -d Ubuntu top
PID USER PR NI VIRT RES %CPU %MEM TIME+ S COMMAND
1 root 20 0 2.1m 1.5m 0.0 0.0 0:00.02 S /init
2 root 20 0 2.2m 1.6m 0.0 0.0 0:00.01 S - /init
7 root 20 0 2.1m 0.2m 0.0 0.0 0:00.00 S - /init
8 root 20 0 45.9m 7.3m 0.0 0.1 0:00.01 S - /usr/bin/WSLGd
11 wslg 20 0 674.9m 34.3m 0.0 0.4 0:00.55 S - /usr/bin/weston --backend=rdp-backend.so --modules=wslgd-notify.so --xwa+
15 wslg 20 0 18.5m 7.2m 0.0 0.1 0:00.03 S - /usr/libexec/weston-rdprail-shell
16 wslg 20 0 2.0m 1.5m 0.0 0.0 0:00.00 S - /init /mnt/c/Program Files/WindowsApps/MicrosoftCorporationII.WindowsSub+
17 message+ 20 0 7.8m 3.3m 0.0 0.0 0:00.02 S - /usr/bin/dbus-daemon --syslog --nofork --nopidfile --system
18 wslg 20 0 230.1m 7.2m 0.0 0.1 0:00.06 S - /usr/bin/pulseaudio --log-target=file:/mnt/wslg/pulseaudio.log --load=mo+
22 wslg 20 0 7.6m 0.3m 0.0 0.0 0:00.00 S - /usr/bin/dbus-daemon --syslog --fork --print-pid 6 --print-address 8 --session
27 root 20 0 2.3m 0.1m 0.0 0.0 0:00.00 S - /init
28 root 20 0 2.3m 0.1m 0.0 0.0 0:00.07 S - /init
30 wslg 20 0 9.5m 4.4m 0.0 0.1 0:00.09 S - -bash
31 wslg 20 0 44.7m 17.7m 0.0 0.2 0:00.04 S - /usr/bin/Xwayland :0 -rootless -core -listen 37 -wm 38 -terminate -n+
93 root 20 0 2.1m 0.1m 0.0 0.0 0:00.00 S - /init
94 root 20 0 2.1m 0.1m 0.0 0.0 0:00.00 S - /init
95 wslg 20 0 6.4m 2.1m 0.0 0.0 0:00.02 R - top
Diagnostic Logs
WslLogs-2022-08-24_17-14-22.zip
Much appreciated 👍
I assume this has not been reviewed by the WSLg developers yet? Let me stress that this is a very serious issue which makes WSL almost unusable on many if not most ARM devices. The problem is that this constant 30+% load affects responsiveness, battery drain, and available processing resources - you essentially have to shut down WSL when not in use.
Let me know if you need more logs - as I initially only uploaded the WSL logs.
Same here on a Windows on ARM - Qualcomm 8cx gen3 (Thinkpad X13s)
Wondered why runtime is so bad until I noticed the 30% CPU penalty.
Need to shutdown WSL if not absolutely needed.
@puzzone, @Gerdya, to narrow down the cause, would you please disable WSL GUI support as shown below and check whether you still observe the issue? Please set the following in .wslconfig, thanks!
[wsl2]
guiApplications=false
For further details, please refer to https://docs.microsoft.com/en-us/windows/wsl/wsl-config.
Issue still present!
@dtgeorge, would you please collect a WSL log by following the instructions at step 8 of https://github.com/Microsoft/WSL/blob/master/CONTRIBUTING.md? Thanks!
WslLogs-2022-09-12_23-15-31.zip
@dtgeorge, thanks for the log. Your log doesn't exhibit the issue @benhillis noted at https://github.com/microsoft/wslg/issues/819#issuecomment-122775983, so the next step would be collecting a more detailed log. Would you please do the following?
Extract log.wprp from log.zip.
Make sure WSL is idle and the high CPU load issue is happening.
From an admin-privileged cmd.exe, run wpr -start log.wprp!GPUView.Verbose -filemode
Wait 3 seconds, then run wpr -stop output.etl
Please share output.etl with us.
Thanks!
I uploaded both the logs and output.etl (output.zip).
WslLogs-2022-09-12_23-47-21.zip
output.zip
@hideyukn88, are @Gerdya's logs enough?
Hi all,
please find my logs with wslg dis- and enabled below.
And big thanks for your help!
wslg-DISABLED.zip
wslg-ENABLED.zip
@hideyukn88, are @Gerdya's logs enough?
Yes, the output.etl is what we need, thanks!
Now we understand the issue and a fix is being made. The fix will be on the Windows OS side (not WSL/WSLg), so the ETA for the fix is still unknown; once I know more, I will post here. Thanks!
@hideyukn88 Thanks for root-causing the issue!
Looks like it is fixed in the dev channel build 25211
Preview Build 25211 Release Notes
Hopefully beta channel is next.
Is there deployment ETA for the normal Win 11 build? Still an issue on Pro X.
It has been a month since the dev channel fix. Nothing on the other channels so far :(
Seems to have been fixed!
Edition Windows 11 Pro Version 22H2 OS build 22621.900 Serial number 035099294753 Experience Windows Feature Experience Pack 1000.22638.1000.0
Idle CPU usage was around 1%-3% with Ubuntu 22.04 running along with a bunch of other apps (Firefox, etc.). Excellent!
Still experiencing this issue.
Edition Windows 11 Pro
Version 22H2
Installed on 8/28/2022
OS build 22621.755
Experience Windows Feature Experience Pack 1000.22636.1000.0
@hideyukn88 @gerdya To add, I have a Surface Pro 9 ARM variant, Virtual Machine Worker Process is using 12.5% CPU and System using 12.5% while Debian on the WSL side is fully idle.
| gharchive/issue | 2022-08-24T15:16:07 | 2025-04-01T06:45:00.237380 | {
"authors": [
"Gerdya",
"Macmee",
"MdeRhoter",
"dtgeorge",
"hideyukn88",
"leuler27",
"puzzone",
"vanc"
],
"repo": "microsoft/wslg",
"url": "https://github.com/microsoft/wslg/issues/819",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2269324992 | Error in Python when serializing complex arrays that are not C-contiguous
Given a model containing arrays of complex numbers, e.g.
MyProtocol: !protocol
sequence:
array: complexfloat[coils, samples]
If the NumPy array being serialized is not "C-contiguous", e.g.
with demo.BinaryMyProtocolWriter("out.bin") as w:
w.write_array(
np.zeros((2, 128), dtype=np.complex64, order="F")
)
yardl's binary serializer throws the following error:
Traceback (most recent call last):
File "/workspaces/yardl/joe/issue-#000/python/test.py", line 25, in <module>
main()
File "/workspaces/yardl/joe/issue-#000/python/test.py", line 7, in main
w.write_array(np.zeros((2, 128), dtype=np.complex64, order="F"))
File "/workspaces/yardl/joe/issue-#000/python/demo/protocols.py", line 52, in write_array
self._write_array(value)
File "/workspaces/yardl/joe/issue-#000/python/demo/binary.py", line 35, in _write_array
_binary.NDArraySerializer(_binary.complexfloat32_serializer, 2).write(self._stream, value)
File "/workspaces/yardl/joe/issue-#000/python/demo/_binary.py", line 1235, in write
self._write_data(stream, value)
File "/workspaces/yardl/joe/issue-#000/python/demo/_binary.py", line 1129, in _write_data
self.element_serializer.write_numpy(stream, element)
File "/workspaces/yardl/joe/issue-#000/python/demo/_binary.py", line 346, in write_numpy
stream.write(self._struct, value)
File "/workspaces/yardl/joe/issue-#000/python/demo/_binary.py", line 129, in write
formatter.pack_into(self._buffer, self._offset, *args)
struct.error: pack_into expected 2 items for packing (got 1)
This is caused by the following events:
NDArraySerializerBase checks if the array is trivially serializable: https://github.com/microsoft/yardl/blob/ea42e477ae78939b4327ca690f7c82bf1e2b2162/tooling/internal/python/static_files/_binary.py#L1125-L1129
It is not, because it is not C-contiguous: https://github.com/microsoft/yardl/blob/ea42e477ae78939b4327ca690f7c82bf1e2b2162/tooling/internal/python/static_files/_binary.py#L1147-L1151
Therefore, NDArraySerializerBase calls the ComplexFloat32Serializer.write_numpy method, which is implemented by StructSerializer, and uses struct.Struct.pack_into with the format string <ff, which is not valid for a complex number. https://github.com/microsoft/yardl/blob/ea42e477ae78939b4327ca690f7c82bf1e2b2162/tooling/internal/python/static_files/_binary.py#L624-L626
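The mismatch is easy to reproduce outside yardl. Below is a minimal sketch (variable names like `pack_failed` are just for illustration) showing both the failing `pack_into` call and a caller-side workaround of making the array C-contiguous before writing:

```python
import struct
import numpy as np

# Reproduce the root cause: the format "<ff" expects two float items,
# but a single complex64 scalar is one item, so pack_into raises.
fmt = struct.Struct("<ff")
buf = bytearray(fmt.size)
z = np.complex64(1 + 2j)

try:
    fmt.pack_into(buf, 0, z)   # one item where two are expected
    pack_failed = False
except struct.error:
    pack_failed = True

# Packing the real/imaginary parts explicitly works:
fmt.pack_into(buf, 0, float(z.real), float(z.imag))

# Caller-side workaround: make the array C-contiguous before writing,
# so the serializer takes its trivially-serializable fast path.
a = np.zeros((2, 128), dtype=np.complex64, order="F")
c = np.ascontiguousarray(a)
```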
| gharchive/issue | 2024-04-29T15:21:02 | 2025-04-01T06:45:00.243234 | {
"authors": [
"naegelejd"
],
"repo": "microsoft/yardl",
"url": "https://github.com/microsoft/yardl/issues/148",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
84347499 | Compatibility Postgresql and Sqlite
This CMS combines cloud plans and download options. I can use it to develop my projects on my server and create sites for clients in the cloud. This is rare. Using the same library makes us more productive and allows code reuse. Great.
I have an obstacle. On GitHub you say PostgreSQL and SQLite are supported (https://github.com/microweber/microweber), but on the installation page only MySQL is listed (https://microweber.com/docs/guides/start.md - recommended software). I'm accustomed to PostgreSQL, and SQLite would be ideal for little projects. So I ask: do you really offer support for PostgreSQL and SQLite in accordance with the Laravel pattern?
This is essential for me. Please answer yes. You will get a client and a contributor.
Hi there,
Thanks for the kind words, we are glad for every developer that finds MW useful.
If you read the README more carefully you'll see that ONLY drivers currently enabled/supported by your system are listed on the installation screen.
Not seeing a given database on that screen may mean one or more of the following:
You haven't installed and/or enabled the PDO driver for that database.
For SQLite it could mean the software itself has not been properly installed and/or configured.
So, basically, you just have to deal with this at the PHP config level. One way to check the currently enabled PDO modules is to type php -m | grep pdo in a Linux shell. I'm not sure about Windows, though. You can just type php -m in a command prompt and go through the list manually to see if a PDO module is missing for your database of choice.
I can't really help you in more detail as you've given me nothing on your local configuration.
I can, however, assure you the Internet is full of resources on how to enable PDO support for any major database. Just google "sqlite php pdo windows 7" or whatever applies to your setup (i.e. "ms sql server php pdo debian").
Cheers! Enjoy and share your results with Microweber! We love to see the code in action.
I assume this resolves the issue because it's been two weeks without a follow-up.
| gharchive/issue | 2015-06-03T04:07:11 | 2025-04-01T06:45:00.320312 | {
"authors": [
"ash-rain",
"valle6"
],
"repo": "microweber/microweber",
"url": "https://github.com/microweber/microweber/issues/290",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
3183889 | Remove unused methods from Sitemap::Store
If these end up being needed again it'd be better to just filter the #pages list.
No code is the best optimization.
| gharchive/issue | 2012-02-11T07:37:17 | 2025-04-01T06:45:00.323073 | {
"authors": [
"bhollis",
"tdreyno"
],
"repo": "middleman/middleman",
"url": "https://github.com/middleman/middleman/issues/267",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
751702082 | no sshd keys
What am I doing wrong, and how can it be solved?
Solved.
| gharchive/issue | 2020-11-26T16:07:44 | 2025-04-01T06:45:00.324679 | {
"authors": [
"goodpen"
],
"repo": "midl-dev/tezos-remote-signer-os",
"url": "https://github.com/midl-dev/tezos-remote-signer-os/issues/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
79989973 | Not getting back Status
I am doing an initial search for Movies with a search term ("Scorpion King" to be exact) and am getting no Status field value, just null. My app does not need to show Released movies in the search results. Do I have to get individual results to get the Status? Am I just supposed to figure it out with the Release Date value?
Same as the other issue. I should have retrieved individual movies one at a time.
| gharchive/issue | 2015-05-23T23:55:55 | 2025-04-01T06:45:00.409116 | {
"authors": [
"robmburke"
],
"repo": "miguelhasse/Net.TMDb",
"url": "https://github.com/miguelhasse/Net.TMDb/issues/4",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
457546106 | [WIP] add initial dockerfile for building tb3 packages
Signed-off-by: Mikael Arguedas mikael.arguedas@gmail.com
@mikaelarguedas ros-planning/navigation2#860 has been merged
Great! thanks @crdelsey for the update!
Note to self about current status:
Currently released version of navigation (0.2.2) does not work for the demo because of map issues.
Release candidate 0.2.3 seems to fix the issue.
The repos listed in this PR as of https://github.com/mikaelarguedas/tb3_demo/pull/1/commits/56711e0752276204ad43e58f4fde9a4a00341993 allow running the turtlebot3 demo using Gazebo and the navigation stack (without SROS2).
Currently unable to get the demo working with SROS2.
https://github.com/mikaelarguedas/tb3_demo/pull/1/commits/d11964b1b19439f8ced5224701b994db5cac967b adds a policy file with most of the policies necessary to get it to launch without issues. A couple of wildcards have been used, as I haven't yet been able to figure out the missing permissions for amcl / amcl_rclcpp_node.
Even using wildcards for all nodes / all topics / all services, the demo doesn't work, as amcl never receives the map when running with security. Need to check whether it's related to QoS or to the overhead of adding security on top of a pretty heavy system.
Failure was due to temporary nodes created by the nav stack. It took a bit of time to track them down, but here is the list of nodes that need to be allowed on the network for the demo to run:
/_client_node
/_ros2cli # this one is for my debugging and not required
/amcl
/amcl_rclcpp_node
/bt_navigator
/bt_navigator_client_node
/bt_navigator_global_localization_client
/bt_navigator_rclcpp_node
/dwb_controller
/dwb_controller_rclcpp_node
/gazebo
/global_costmap
/global_costmap/global_costmap
/global_costmap/global_costmap_rclcpp_node
/global_costmap_client
/global_localization
/launch_ros
/lifecycle_manager
/lifecycle_managerservice_client
/local_costmap
/local_costmap/local_costmap
/local_costmap/local_costmap_rclcpp_node
/local_costmap_client
/map_server
/navfn_planner
/navfn_planner_GetCostmap_client
/navfn_planner_GetRobotPose_client
/navfn_planner_rclcpp_node
/navigation_dialog_action_client
/recoveries
/recovery_GetRobotPose_client
/robot_state_publisher
/rviz2
/transform_listener_impl
/turtlebot3_diff_drive
/turtlebot3_imu
/turtlebot3_joint_state
/turtlebot3_laserscan
/world_model
I used this command to create them with wildcards:
ros2 security generate_artifacts -k keystore -n /lifecycle_manager /navigation_dialog_action_client /navfn_planner_rclcpp_node /navfn_planner /global_costmap_client /gazebo /lifecycle_managerservice_client /dwb_controller_rclcpp_node /recovery_GetRobotPose_client /launch_ros /dwb_controller /turtlebot3_diff_drive /amcl_rclcpp_node /bt_navigator_rclcpp_node /robot_state_publisher /global_costmap /global_costmap/global_costmap_rclcpp_node /global_costmap/global_costmap /turtlebot3_laserscan /bt_navigator /map_server /world_model /local_costmap_client /transform_listener_impl /local_costmap /local_costmap/local_costmap_rclcpp_node /local_costmap/local_costmap /navfn_planner_GetRobotPose_client /recoveries /navfn_planner_GetCostmap_client /amcl /rviz2 /turtlebot3_imu /_ros2cli /turtlebot3_joint_state /bt_navigator_client_node /_client_node /global_localization /bt_navigator_global_localization_client
As @ruffsl was able to reproduce this demo with encryption enabled, I will merge this first version.
Improvements to the policy files and documentation will be done in follow-up PRs
FYI @thomas-moulard @olaldiko
| gharchive/pull-request | 2019-06-18T15:21:47 | 2025-04-01T06:45:00.422921 | {
"authors": [
"crdelsey",
"mikaelarguedas"
],
"repo": "mikaelarguedas/tb3_demo",
"url": "https://github.com/mikaelarguedas/tb3_demo/pull/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1555230489 | missing 'books'-folder
Title says it. I guess it should be in the 'BookGPT'-folder?
Do you mean that you couldn't clone the git repo, or that you cannot find where your book is saved? If the latter, you will find it in the .../BookGPT/src/ folder.
Changed it in the README file.
Thanks, I just found it. I expected it to be named by the title provided.
| gharchive/issue | 2023-01-24T15:51:49 | 2025-04-01T06:45:00.424619 | {
"authors": [
"SimonB97",
"malore350",
"mikavehns"
],
"repo": "mikavehns/BookGPT",
"url": "https://github.com/mikavehns/BookGPT/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
3494219 | Attempt to recover from non-uri-encoded location headers
Hey Mikeal,
I came across a strange bug while trying to use request to download some files from GitHub. From what I could tell, a redirect URI to cloud.github.com has a space in it, and this space confused GitHub's servers on the second request.
I was able to get a fix together for myself at least, so I'm sharing. I'm sure you can think of a better way to do this check, but at least this gives you an idea?
Anyway.
Maybe the issue is that we're decoding the URI completely with resolve when it should be encoded in the header.
What happens when you remove that if statement and just urlEncode all location headers?
Some location headers are already encoded, and that just gets it more confused.
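That double-encoding failure mode is easy to demonstrate. The sketch below uses Python's urllib.parse.quote as a stand-in for urlEncode, since request itself is JavaScript; the URL is made up:

```python
from urllib.parse import quote

# A redirect target with an unencoded space, like the cloud.github.com case:
loc = "https://cloud.github.com/some file.tar.gz"
encoded = quote(loc, safe=":/")   # the space becomes %20

# Blindly encoding every Location header double-escapes ones that were
# already encoded: the "%" in "%20" is itself escaped to "%25".
double = quote(encoded, safe=":/")
```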
As an aside, html entities that start with a % will mess up a naive implementation of logref interpolation. Is yours guarded against this? Mine wasn't, and I got some preeetty funny logging messages.
logref is my library, i don't need to be PR'd :)
well no, but it's the nice thing to do right?
haha, i read that as PR == Public Relations not PR === Pull Request. yes, a pull request would be excellent :)
| gharchive/issue | 2012-03-04T03:17:15 | 2025-04-01T06:45:00.432164 | {
"authors": [
"jesusabdullah",
"mikeal"
],
"repo": "mikeal/request",
"url": "https://github.com/mikeal/request/issues/200",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
864977641 | Can this be made to always show with the inline viewer on mobile, with an override option to always show modal?
Is it possible to have the datepicker always open in inline mode on a mobile browser, even if set to modal in the options? The modal option doesn't work well with the browsers on my phone, for instance. Perhaps an option to override inline on mobile could also be included, in case it is necessary to show the modal version all the time.
I will try to optimise the animations on mobile or switch them off; I think this will help.
For my purposes the latest development commit fixes the issue
| gharchive/issue | 2021-04-22T13:51:38 | 2025-04-01T06:45:00.445248 | {
"authors": [
"ashenshugarRET",
"mikecoj"
],
"repo": "mikecoj/MCDatepicker",
"url": "https://github.com/mikecoj/MCDatepicker/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2709202198 | add avatar to resume
This pull request introduces a new feature that allows users to include an avatar image in their resumes.
How to use it:
Place avatar.png with your avatar in place, or change avatar_path to your avatar's path in resume_with_avatar.py.
Run resume_with_avatar.py.
Resumes should not include pictures or avatars. Feel free to fork if this functionality is important to you.
| gharchive/pull-request | 2024-12-01T13:35:44 | 2025-04-01T06:45:00.472544 | {
"authors": [
"luca1iu",
"mikepqr"
],
"repo": "mikepqr/resume.md",
"url": "https://github.com/mikepqr/resume.md/pull/46",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
71055084 | Unobserve callbacks are not the observe callbacks
The update callback is wrapped with the bind(this) method (ECMAScript 5). You call .bind(this) each time you pass the bound callback as a parameter. Unfortunately, all the bound callbacks will be different, as the bind function creates a new function each time.
So the adapter will be unable to unreference the callbacks, and we have a memory leak. See issue #430 in rivets (https://github.com/mikeric/rivets/issues/430).
My opinion: don't use .bind(this), as it is not available in all browsers (especially older Androids). Use the good old "var self = this" method instead.
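The underlying pitfall is language-agnostic: re-binding produces a fresh function object each time, so the registry can never match it for removal. A Python analogue using functools.partial in place of .bind(this) (all names here are illustrative):

```python
from functools import partial

class Adapter:
    """Toy observe/unobserve registry, mimicking the adapter pattern."""
    def __init__(self):
        self.callbacks = []
    def observe(self, cb):
        self.callbacks.append(cb)
    def unobserve(self, cb):
        # Removal relies on equality; distinct partials never compare equal.
        if cb in self.callbacks:
            self.callbacks.remove(cb)

def update(ctx, value):
    pass

ctx = object()
a = Adapter()
a.observe(partial(update, ctx))
a.unobserve(partial(update, ctx))   # a *new* object each call: no match
leaked = len(a.callbacks)           # the original callback leaks

# Binding once and reusing the same reference avoids the leak:
bound = partial(update, ctx)
b = Adapter()
b.observe(bound)
b.unobserve(bound)
```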
If you implement an adapter using Object.observe and Object.unobserve, then this bug combines with a V8 bug to cause exceptions.
| gharchive/issue | 2015-04-26T11:36:42 | 2025-04-01T06:45:00.475739 | {
"authors": [
"JeffSwenson",
"jccazeaux"
],
"repo": "mikeric/sightglass",
"url": "https://github.com/mikeric/sightglass/issues/10",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1861625521 | Feature update client whadmin
Changes
Set up Login, Register, Navigation Bar for client-whadmin as well as warehouse page
Debugged the PageRender file, which cannot use the old format for rendering (dynamic import function) because of Vite/React's dynamic import mechanism.
Set up the data-fetching API globally, but it is not completed, as it needs testing once the back-end is finished.
@LaansDole Add the PR description and then everything is good to merge
Resolve issue #3
| gharchive/pull-request | 2023-08-22T14:45:50 | 2025-04-01T06:45:00.496431 | {
"authors": [
"DeathHunterX",
"LaansDole",
"miketvo"
],
"repo": "miketvo/rmit-isys2099-group9-app",
"url": "https://github.com/miketvo/rmit-isys2099-group9-app/pull/34",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
172398295 | Add more texture properties
This PR allows you to retrieve the properties 'format', 'type', 'mag', 'min', 'wrapS', 'wrapT' after texture creation.
These properties can be accessed after texture creation like this:
var t = regl.texture({
shape: [16, 16],
min: 'nearest mipmap linear',
mag: 'linear',
wrapS: 'mirror',
wrapT: 'repeat',
format: 'rgb',
type: 'uint8'
})
console.log('f: ', t.min, t.mag, t.wrapS, t.wrapT, t.format, t.type)
This PR resolves issue #273.
What about cube maps and framebuffer objects?
Whoops. I will make a new issue for that.
| gharchive/pull-request | 2016-08-22T08:46:51 | 2025-04-01T06:45:00.512318 | {
"authors": [
"Erkaman",
"mikolalysenko"
],
"repo": "mikolalysenko/regl",
"url": "https://github.com/mikolalysenko/regl/pull/276",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1500890371 | Use NVML instead of parsing nvidia-smi output
The latest version of milabench breaks because of dysfunctional parsing in https://github.com/mila-iqia/milabench/blob/master/milabench/gpu.py . Use pynvml https://pypi.org/project/pynvml/ instead.
root@fdea0e909c74:/workspace/milabench# milabench run --base standard-cuda/ config/standard-cuda.yaml
There was a problem with nvidia-smi:
================================================================================
b'ERROR: no input message specified\n'
================================================================================
There was a problem with nvidia-smi:
================================================================================
b'ERROR: no input message specified\n'
================================================================================
[BEGIN] Reports directory: /workspace/milabench/standard-cuda/runs/dijuzaza.2022-12-16_21:20:30.417110
There was a problem with nvidia-smi:
================================================================================
b'ERROR: no input message specified\n'
================================================================================
Traceback (most recent call last):
File "/opt/anaconda/bin/milabench", line 8, in <module>
sys.exit(main())
File "/workspace/milabench/milabench/cli.py", line 23, in main
run_cli(Main)
File "/opt/anaconda/lib/python3.9/site-packages/coleo/cli.py", line 628, in run_cli
return call(opts=opts, args=args)
File "/opt/anaconda/lib/python3.9/site-packages/coleo/cli.py", line 587, in thunk
result = fn(*args)
File "/workspace/milabench/milabench/cli.py", line 192, in run
mp.do_run(
File "/workspace/milabench/milabench/multi.py", line 116, in do_run
for run in method(cfg, **plan):
File "/workspace/milabench/milabench/multi.py", line 34, in per_gpu
gpus = get_gpu_info().values()
AttributeError: 'NoneType' object has no attribute 'values'
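For reference, a hedged sketch of what an NVML-based `get_gpu_info` might look like (the function name matches milabench/gpu.py; the returned fields are illustrative, and the sketch degrades to an empty dict when pynvml or a driver is unavailable):

```python
# Sketch: query GPUs via NVML bindings instead of parsing nvidia-smi output.
try:
    import pynvml
except ImportError:
    pynvml = None

def get_gpu_info():
    if pynvml is None:
        return {}
    try:
        pynvml.nvmlInit()
    except pynvml.NVMLError:
        return {}   # no NVIDIA driver available
    try:
        info = {}
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
            util = pynvml.nvmlDeviceGetUtilizationRates(handle)
            info[i] = {
                "memory": {"used": mem.used, "total": mem.total},
                "utilization": util.gpu,
            }
        return info
    finally:
        pynvml.nvmlShutdown()
```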
Yeah, it appears xml2json failed for some reason. Using pynvml instead is a good idea, I'll take care of it. Thanks!
Done in #40
| gharchive/issue | 2022-12-16T21:32:06 | 2025-04-01T06:45:00.517676 | {
"authors": [
"breuleux",
"gravitino"
],
"repo": "mila-iqia/milabench",
"url": "https://github.com/mila-iqia/milabench/issues/38",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
120264425 | EASGD
@abergeron @lamblin @nouiz @mgermain
The distributed algorithm I would like to try and implement is EASGD (arXiv:1412.6651), which is beautifully simple (see algorithm 1 in the paper):
x = params.get_value()
# x_tilde are the memory-mapped shared parameters
diff = alpha * (x - x_tilde)
params.set_value(x - diff)       # worker moves toward the center
x_tilde[...] = x_tilde + diff    # center moves toward the worker
In general, distributed algorithms can probably take the form update_params(shared_params, worker_params), a function that is expected to update both in place, so:
def easgd(x_tilde, x):
    diff = alpha * (x - x_tilde)
    x_tilde[...] = x_tilde + diff   # center moves toward the worker
    x[...] = x - diff               # worker moves toward the center

x = params.get_value()
easgd(x_tilde, x)
params.set_value(x)
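A tiny numeric check of the elastic-averaging update, with alpha = 0.5 chosen for illustration; at that value both copies meet halfway between the starting points:

```python
import numpy as np

alpha = 0.5
x_tilde = np.array([0.0, 0.0])   # central (shared) parameters
x = np.array([1.0, 2.0])         # worker's local parameters

diff = alpha * (x - x_tilde)
x = x - diff                      # worker is pulled toward the center
x_tilde = x_tilde + diff          # center is pulled toward the worker
```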
Closing the issue as this has been implemented.
| gharchive/issue | 2015-12-03T20:37:50 | 2025-04-01T06:45:00.519958 | {
"authors": [
"bartvm",
"carriepl"
],
"repo": "mila-udem/platoon",
"url": "https://github.com/mila-udem/platoon/issues/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
233179988 | Don't insert newline into noop ArrowFunctionExpression
Input:
const f1 = () => {};
const f2 = function() {};
Output:
const f1 = () => {
};
const f2 = function() {};
I have a PR for this incoming.
merged 29045f0
| gharchive/issue | 2017-06-02T13:16:52 | 2025-04-01T06:45:00.536674 | {
"authors": [
"langdonx",
"millermedeiros"
],
"repo": "millermedeiros/esformatter",
"url": "https://github.com/millermedeiros/esformatter/issues/487",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
938062718 | Navigation bar items are not dynamic
Titles, back buttons, etc in navigation bars do not resize based on accessibility font.
Report this to apple
| gharchive/issue | 2021-07-06T16:20:26 | 2025-04-01T06:45:00.537502 | {
"authors": [
"AmandaJackson6",
"milnel2"
],
"repo": "milnel2/blocks4alliOS",
"url": "https://github.com/milnel2/blocks4alliOS/issues/179",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1715151692 | [Feature]: Add the remaining partition APIs
Is there an existing issue for this?
[X] I have searched the existing issues
Is your feature request related to a problem? Please describe.
I will add the APIs:
drop_partition
load_partitions
release_partitions
Also, there are some warnings under the Rust clippy rules. I'll fix them.
Describe the solution you'd like.
No response
Describe an alternate solution.
No response
Anything else? (Additional Context)
No response
Added with #56
| gharchive/issue | 2023-05-18T07:21:19 | 2025-04-01T06:45:00.552087 | {
"authors": [
"kingzcheung",
"yah01"
],
"repo": "milvus-io/milvus-sdk-rust",
"url": "https://github.com/milvus-io/milvus-sdk-rust/issues/55",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
851412289 | Add "kindness" to the good-points assessment results
I want to add the following result:
'{UserName}のいいところは優しさです。あなたの優しい雰囲気や立ち振る舞いに多くの人が癒やされています。'
I will work on this now.
Addressed in 940658e.
| gharchive/issue | 2021-04-06T13:14:28 | 2025-04-01T06:45:00.663986 | {
"authors": [
"minch23"
],
"repo": "minch23/assessment",
"url": "https://github.com/minch23/assessment/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1915054241 | Training on a custom dataset
I defined a custom vehicle-detection training dataset in YOLO format. It trains without problems on the yolov5-6.2 framework, but today I ran into some issues training it with mindyolo.
Host platform: Ubuntu 22.04, GPU: RTX 3080 Ti, CUDA 12.0. I pulled the CUDA 11.6 MindSpore image via Docker, verified that the GPU build of MindSpore installed successfully, and then installed mindyolo.
Training command:
python train.py --config ./config/yolov5/yolov5s.yaml --device_target GPU
1. ValueError: invalid literal for int() with base 10: "xxxxxx"
Location of the error: /mindyolo/mindyolo/data/dataset.py, line 198
self.imgIds = [int(Path(im_file).stem) for im_file in self.img_files]
Since I'm using a custom dataset and the images are not saved in the int value.jpg naming format, I changed it to
self.imgIds = [index for index in range(len(self.img_files))]
I'm not sure whether this change affects anything downstream.
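The failure and the enumeration workaround can be sketched as below (file names are made up). One thing worth double-checking downstream is whether these imgIds are later matched against annotation ids during COCO-style evaluation; if so, the enumeration indices must be kept consistent with the annotation file:

```python
from pathlib import Path

img_files = ["data/car_0001.jpg", "data/car_0002.jpg"]  # illustrative names

# Original approach: assumes COCO-style numeric stems such as "000123.jpg".
try:
    img_ids = [int(Path(f).stem) for f in img_files]
except ValueError:
    img_ids = None                  # "car_0001" is not an int

# Enumeration-based fallback for arbitrary file names:
fallback_ids = list(range(len(img_files)))
```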
2. Warnings about is_slice, as follows:
[WARNING] ME(3227:140445071247168,MainProcess):2023-09-27-07:13:00.606.995 [mindspore/ops/primitive.py:814] The "use_copy_slice" is a constexpr fune all constant value. [WARNING] ME(3227:140445071247168,MainProcess):2023-09-27-07:13:00.644.340 [mindspore/ops/primitive.py:814] The "is_slice" is a constexpr function.constant value. [WARNING] ME(3227:140445071247168,MainProcess):2023-09-27-07:13:00.812.601 [mindspore/ops/primitive.py:814] The "use_copy_slice" is a constexpr fune all constant value. [WARNING] ME(3227:140445071247168,MainProcess):2023-09-27-07:13:00.824.830 [mindspore/ops/primitive.py:814] The "is_slice" is a constexpr function.constant value. [WARNING] ME(3227:140445071247168,MainProcess):2023-09-27-07:13:00.836.749 [mindspore/ops/primitive.py:814] The "is_slice" is a constexpr function.constant value. [WARNING] ME(3227:140445071247168,MainProcess):2023-09-27-07:13:00.850.231 [mindspore/ops/primitive.py:814] The "is_slice" is a constexpr function.constant value. [WARNING] ME(3227:140445071247168,MainProcess):2023-09-27-07:13:00.865.217 [mindspore/ops/primitive.py:814] The "is_slice" is a constexpr function.constant value. [WARNING] ME(3227:140445071247168,MainProcess):2023-09-27-07:13:01.195.152 [mindspore/ops/primitive.py:814] The "use_copy_slice" is a constexpr fune all constant value. [WARNING] ME(3227:140445071247168,MainProcess):2023-09-27-07:13:01.200.028 [mindspore/ops/primitive.py:814] The "is_slice" is a constexpr function.constant value. [WARNING] ME(3227:140445071247168,MainProcess):2023-09-27-07:13:01.207.418 [mindspore/ops/primitive.py:814] The "is_slice" is a constexpr function.constant value. [WARNING] ME(3227:140445071247168,MainProcess):2023-09-27-07:13:01.283.436 [mindspore/ops/primitive.py:814] The "is_slice" is a constexpr function.constant value. [WARNING] ME(3227:140445071247168,MainProcess):2023-09-27-07:13:01.291.933 [mindspore/ops/primitive.py:814] The "is_slice" is a constexpr function.constant value. 
[WARNING] ME(3227:140445071247168,MainProcess):2023-09-27-07:13:01.582.653 [mindspore/ops/primitive.py:814] The "use_copy_slice" is a constexpr fune all constant value. [WARNING] ME(3227:140445071247168,MainProcess):2023-09-27-07:13:01.587.358 [mindspore/ops/primitive.py:814] The "is_slice" is a constexpr function.constant value. [WARNING] ME(3227:140445071247168,MainProcess):2023-09-27-07:13:01.594.711 [mindspore/ops/primitive.py:814] The "is_slice" is a constexpr function.constant value. [WARNING] ME(3227:140445071247168,MainProcess):2023-09-27-07:13:01.602.830 [mindspore/ops/primitive.py:814] The "is_slice" is a constexpr function.constant value. [WARNING] ME(3227:140445071247168,MainProcess):2023-09-27-07:13:01.609.911 [mindspore/ops/primitive.py:814] The "is_slice" is a constexpr function.constant value.
The cause of these warnings is unclear for now.
3. The training loss is nan; details:
2023-09-27 09:05:09,329 [INFO] Epoch 6/300, Step 200/510, step time: 1095.86 ms 2023-09-27 09:05:20,349 [WARNING] overflow, still update, loss scale adjust to 1024.0 2023-09-27 09:05:20,355 [INFO] Epoch 6/300, Step 210/510, imgsize (640, 640), loss: nan, lbox: nan, lobj: nan, lcls: nan, cur_lr: 0.0009835000382736325
This did not happen when training with the official yolov5, and the cause is unknown. train.py sets clip_grad=True.
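The "overflow, still update, loss scale adjust to 1024.0" warning in the log suggests dynamic loss scaling reacting to inf/nan gradients. As a rough illustration only (plain Python, not MindSpore's actual implementation), a dynamic loss scaler typically behaves like this:

```python
import math

def update_loss_scale(grads, scale, good_steps, growth_interval=1000):
    """Toy dynamic loss-scale update: halve the scale when any gradient
    overflowed (inf/nan), and double it back after enough clean steps.
    Returns (new_scale, new_good_steps, overflowed)."""
    overflowed = any(math.isinf(g) or math.isnan(g) for g in grads)
    if overflowed:
        # shrink the scale and reset the clean-step counter
        return max(scale / 2.0, 1.0), 0, True
    good_steps += 1
    if good_steps >= growth_interval:
        # enough clean steps: try a larger scale again
        return scale * 2.0, 0, False
    return scale, good_steps, False
```

A persistently nan loss, though, means the nan is produced in the forward or backward pass itself (for example by a bad label or a division by zero), which loss scaling alone cannot fix.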
MindSpore is an excellent deep learning framework, and mindyolo greatly increases MindSpore's value for industrial applications. I am currently training on an NVIDIA GPU and will move to the Ascend platform once these problems are solved. Judging by training time, the yolov5 provided by mindyolo trains much faster than yolov5 in PyTorch (with the same batch_size, optimizer, and other parameters). However, some problems remain that I need to discuss with the team; I hope for an official reply, thanks!
If mindyolo does not yet support GPU training, can the yolov5 code base provided by MindSpore be used for training and inference?
First of all, thank you very much for your interest and use. Regarding the issues above: problems 1 and 2 should have no impact. For problem 3, we do not currently support GPU training; the nan probably arises during backward gradient computation. If you have access to an Ascend machine, you could try moving the training there.
Got it, thanks for the reply. Since this is my first time using Ascend-related hardware: do Atlas accelerator cards support MindSpore and mindyolo? And are there reference examples or tutorials to help us get started?
Yes, they are supported. You can follow the steps in GETTING_STARTED.md directly.
Sorry to reopen this issue. I am now using a domestic server running Kylin V10, with two Kunpeng 920 5220 CPUs and an Atlas 300V Pro accelerator. The vendor has finished installing the device drivers, and npu-smi info prints the following:
(mindspore) [root@localhost mindyolo]# npu-smi info
+--------------------------------------------------------------------------------------------------------+
| npu-smi 22.0.4                               Version: 22.0.4                                            |
+-------------------------------+-----------------+------------------------------------------------------+
| NPU   Name                    | Health          | Power(W)   Temp(C)           Hugepages-Usage(page)    |
| Chip  Device                  | Bus-Id          | AICore(%)  Memory-Usage(MB)                           |
+===============================+=================+======================================================+
| 2     310P3                   | OK              | NA         52                0 / 0                    |
| 0     0                       | 0000:02:00.0    | 0          1858 / 44215                               |
+===============================+=================+======================================================+
I installed the Ascend 310 build in a conda environment and ran the training command from the terminal:
python train.py --config ./config/yolov5/yolov5s.yaml
It reports the following error:
[WARNING] ME(16848:281469675719664,MainProcess):2023-10-12-09:19:19.764.384 [mindspore/run_check/_check_version.py:357] MindSpore version 2.1.1 and Ascend AI software package (Ascend Data Center Solution)version 1.82 does not match, the version of software package expect one of ['6.4']. Please refer to the match info on: https://www.mindspore.cn/install
[WARNING] ME(16848:281469675719664,MainProcess):2023-10-12-09:19:19.764.508 [mindspore/run_check/_check_version.py:460] Can not find the tbe operator implementation(need by mindspore-ascend). Please check whether the Environment Variable PYTHONPATH is set. For details, refer to the installation guidelines: https://www.mindspore.cn/install
[WARNING] ME(16848:281469675719664,MainProcess):2023-10-12-09:19:19.764.568 [mindspore/run_check/_check_version.py:466] Can not find driver so(need by mindspore-ascend). Please check whether the Environment Variable LD_LIBRARY_PATH is set. For details, refer to the installation guidelines: https://www.mindspore.cn/install
Traceback (most recent call last):
File "train.py", line 321, in
train(args)
File "train.py", line 93, in train
set_default(args)
File "/home/mindyolo/mindyolo/mindyolo/utils/utils.py", line 24, in set_default
context.set_context(mode=args.ms_mode, device_target=args.device_target, max_call_depth=2000)
File "/root/miniconda3/envs/mindspore/lib/python3.7/site-packages/mindspore/_checkparam.py", line 1319, in wrapper
return func(*args, **kwargs)
File "/root/miniconda3/envs/mindspore/lib/python3.7/site-packages/mindspore/context.py", line 1371, in set_context
ctx.set_device_target(kwargs['device_target'])
File "/root/miniconda3/envs/mindspore/lib/python3.7/site-packages/mindspore/context.py", line 373, in set_device_target
self.set_param(ms_ctx_param.device_target, target)
File "/root/miniconda3/envs/mindspore/lib/python3.7/site-packages/mindspore/context.py", line 175, in set_param
self._context_handle.set_param(param, value)
RuntimeError: Unsupported device target Ascend. This process only supports one of the ['CPU']. Please check whether the Ascend environment is installed and configured correctly, and check whether current mindspore wheel package was built with "-e Ascend". For details, please refer to "Device load error message".
Device load error message:
Load dynamic library: libmindspore_ascend.so.2 failed. liboptiling.so: cannot open shared object file: No such file or directory
Load dynamic library: libmindspore_ascend.so.1 failed. liboptiling.so: cannot open shared object file: No such file or directory
C++ Call Stack: (For framework developers)
mindspore/core/utils/ms_context.cc:327 SetDeviceTargetFromInner
Is this caused by a mismatch between MindSpore 2.1.1 and the Ascend AI software package version? Switching to the command python train.py --config ./config/yolov5/yolov5s.yaml --device_target "CPU" reports the following error:
albumentations: Blur(p=0.01, blur_limit=(3, 7)), MedianBlur(p=0.01, blur_limit=(3, 7)), ToGray(p=0.01), CLAHE(p=0.01, clip_limit=(1, 4.0), tile_grid_size=(8, 8))
albumentations: Blur(p=0.01, blur_limit=(3, 7)), MedianBlur(p=0.01, blur_limit=(3, 7)), ToGray(p=0.01), CLAHE(p=0.01, clip_limit=(1, 4.0), tile_grid_size=(8, 8))
[INFO] albumentations load success
albumentations: Blur(p=0.01, blur_limit=(3, 7)), MedianBlur(p=0.01, blur_limit=(3, 7)), ToGray(p=0.01), CLAHE(p=0.01, clip_limit=(1, 4.0), tile_grid_size=(8, 8))
[INFO] albumentations load success
[INFO] albumentations load success
albumentations: Blur(p=0.01, blur_limit=(3, 7)), MedianBlur(p=0.01, blur_limit=(3, 7)), ToGray(p=0.01), CLAHE(p=0.01, clip_limit=(1, 4.0), tile_grid_size=(8, 8))
[INFO] albumentations load success
[WARNING] ME(6132:281469865118704,MainProcess):2023-10-11-18:14:23.267.112 [mindspore/ops/primitive.py:814] The "use_copy_slice" is a constexpr function. The input arguments must be all constant value.
[WARNING] ME(6132:281469865118704,MainProcess):2023-10-11-18:14:23.290.782 [mindspore/ops/primitive.py:814] The "is_slice" is a constexpr function. The input arguments must be all constant value.
[WARNING] ME(6132:281469865118704,MainProcess):2023-10-11-18:14:23.321.589 [mindspore/ops/primitive.py:814] The "is_slice" is a constexpr function. The input arguments must be all constant value.
[WARNING] ME(6132:281469865118704,MainProcess):2023-10-11-18:14:23.345.797 [mindspore/ops/primitive.py:814] The "is_slice" is a constexpr function. The input arguments must be all constant value.
[WARNING] ME(6132:281469865118704,MainProcess):2023-10-11-18:14:23.370.442 [mindspore/ops/primitive.py:814] The "is_slice" is a constexpr function. The input arguments must be all constant value.
[WARNING] ME(6132:281469865118704,MainProcess):2023-10-11-18:14:23.988.891 [mindspore/ops/primitive.py:814] The "use_copy_slice" is a constexpr function. The input arguments must be all constant value.
[WARNING] ME(6132:281469865118704,MainProcess):2023-10-11-18:14:24.821. [mindspore/ops/primitive.py:814] The "is_slice" is a constexpr function. The input arguments must be all constant value.
[WARNING] ME(6132:281469865118704,MainProcess):2023-10-11-18:14:24.159.68 [mindspore/ops/primitive.py:814] The "is_slice" is a constexpr function. The input arguments must be all constant value.
[WARNING] ME(6132:281469865118704,MainProcess):2023-10-11-18:14:24.368.78 [mindspore/ops/primitive.py:814] The "is_slice" is a constexpr function. The input arguments must be all constant value.
[WARNING] ME(6132:281469865118704,MainProcess):2023-10-11-18:14:24.538.58 [mindspore/ops/primitive.py:814] The "is_slice" is a constexpr function. The input arguments must be all constant value.
[WARNING] ME(6132:281469865118704,MainProcess):2023-10-11-18:14:24.531.467 [mindspore/ops/primitive.py:814] The "use_copy_slice" is a constexpr function. The input arguments must be all constant value.
[WARNING] ME(6132:281469865118704,MainProcess):2023-10-11-18:14:24.540.430 [mindspore/ops/primitive.py:814] The "is_slice" is a constexpr function. The input arguments must be all constant value.
[WARNING] ME(6132:281469865118704,MainProcess):2023-10-11-18:14:24.557.863 [mindspore/ops/primitive.py:814] The "is_slice" is a constexpr function. The input arguments must be all constant value.
[WARNING] ME(6132:281469865118704,MainProcess):2023-10-11-18:14:24.585.289 [mindspore/ops/primitive.py:814] The "is_slice" is a constexpr function. The input arguments must be all constant value.
[WARNING] ME(6132:281469865118704,MainProcess):2023-10-11-18:14:24.599.736 [mindspore/ops/primitive.py:814] The "is_slice" is a constexpr function. The input arguments must be all constant value.
[WARNING] UTILS(6132,fffecf534bf0,python3):2023-10-11-18:14:24.753.422 [mindspore/ccsrc/utils/comm_manager.cc:80] GetInstance] CommManager instance for CPU not found, return default instance.
[ERROR] ANALYZER(6132,fffecf534bf0,python3):2023-10-11-18:14:24.954.369 [mindspore/ccsrc/pipeline/jit/static_analysis/async_eval_result.cc:69] HandleException] Exception happened, check the information as below.
The function call stack (See file '/home/mindyolo/mindyolo/rank_0/om/analyze_fail.ir' for more details. Get instructions about analyze_fail.ir at https://www.mindspore.cn/search?inputValue=analyze_fail.ir):
0 In file /home/mindyolo/mindyolo/mindyolo/utils/train_step_factory.py:77
return train_step_func(*args)
^
1 In file /home/mindyolo/mindyolo/mindyolo/utils/train_step_factory.py:59
if clip_grad:
2 In file /home/mindyolo/mindyolo/mindyolo/utils/train_step_factory.py:54
(loss, loss_items), grads = grad_fn(x, label)
^
3 In file /root/miniconda3/envs/mindspore/lib/python3.7/site-packages/mindspore/ops/composite/base.py:584
return grad_(fn, weights)(*args)
^
4 In file /home/mindyolo/mindyolo/mindyolo/utils/train_step_factory.py:47
loss, loss_items = loss_fn(pred, label, x)
^
5 In file /home/mindyolo/mindyolo/mindyolo/models/losses/yolov5_loss.py:88
for layer_index, pi in enumerate(p): # layer index, layer predictions
^
6 In file /home/mindyolo/mindyolo/mindyolo/utils/train_step_factory.py:47
loss, loss_items = loss_fn(pred, label, x)
^
7 In file /home/mindyolo/mindyolo/mindyolo/models/losses/yolov5_loss.py:88
for layer_index, pi in enumerate(p): # layer index, layer predictions
^
8 In file /home/mindyolo/mindyolo/mindyolo/utils/train_step_factory.py:47
loss, loss_items = loss_fn(pred, label, x)
^
9 In file /home/mindyolo/mindyolo/mindyolo/models/losses/yolov5_loss.py:88
for layer_index, pi in enumerate(p): # layer index, layer predictions
^
10 In file /home/mindyolo/mindyolo/mindyolo/utils/train_step_factory.py:47
loss, loss_items = loss_fn(pred, label, x)
^
11 In file /home/mindyolo/mindyolo/mindyolo/models/losses/yolov5_loss.py:88
for layer_index, pi in enumerate(p): # layer index, layer predictions
^
12 In file /home/mindyolo/mindyolo/mindyolo/models/losses/yolov5_loss.py:133
loss_item = ops.stop_gradient(ops.stack((loss, lbox, lobj, lcls)))
^
13 In file /root/miniconda3/envs/mindspore/lib/python3.7/site-packages/mindspore/ops/function/array_func.py:2207
return _stack(tensors)
^
Traceback (most recent call last):
File "train.py", line 321, in
train(args)
File "train.py", line 293, in train
profiler_step_num=args.profiler_step_num
File "/home/mindyolo/mindyolo/mindyolo/utils/trainer_factory.py", line 171, in train
cur_step=cur_step,cur_epoch=cur_epoch)
File "/home/mindyolo/mindyolo/mindyolo/utils/trainer_factory.py", line 366, in train_step
loss, loss_item, _, grads_finite = self.train_step_fn(imgs, labels, True)
File "/root/miniconda3/envs/mindspore/lib/python3.7/site-packages/mindspore/common/api.py", line 807, in staging_specialize
out = _MindsporeFunctionExecutor(func, hash_obj, input_signature, process_obj, jit_config)(*args, **kwargs)
File "/root/miniconda3/envs/mindspore/lib/python3.7/site-packages/mindspore/common/api.py", line 106, in wrapper
results = fn(*arg, **kwargs)
File "/root/miniconda3/envs/mindspore/lib/python3.7/site-packages/mindspore/common/api.py", line 526, in call
raise err
File "/root/miniconda3/envs/mindspore/lib/python3.7/site-packages/mindspore/common/api.py", line 523, in call
phase = self.compile(self.fn.name, *args_list, **kwargs)
File "/root/miniconda3/envs/mindspore/lib/python3.7/site-packages/mindspore/common/api.py", line 599, in compile
is_compile = self._graph_executor.compile(self.fn, compile_args, kwargs, phase, True)
File "/root/miniconda3/envs/mindspore/lib/python3.7/site-packages/mindspore/ops/operations/array_ops.py", line 3108, in infer
all_shape = _get_stack_shape(value, x_shape, x_type, self.axis, self.name)
File "/root/miniconda3/envs/mindspore/lib/python3.7/site-packages/mindspore/ops/operations/array_ops.py", line 3002, in _get_stack_shape
raise TypeError("For '{}', all types should be same, but got {}".format(prim_name, x_type))
TypeError: For 'Stack', all types should be same, but got (mindspore.tensor[float32], mindspore.tensor[float32], mindspore.tensor[float32], mindspore.float32)
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/root/miniconda3/envs/mindspore/lib/python3.7/multiprocessing/util.py", line 357, in _exit_function
p.join()
File "/root/miniconda3/envs/mindspore/lib/python3.7/multiprocessing/process.py", line 137, in join
self._check_closed()
File "/root/miniconda3/envs/mindspore/lib/python3.7/multiprocessing/process.py", line 92, in _check_closed
raise ValueError("process object is closed")
ValueError: process object is closed
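The TypeError from 'Stack' above says the four inputs mixed mindspore.tensor[float32] with a bare float32 scalar, and stacking requires all inputs to share one type. A minimal, framework-free sketch of such a pre-stack check (the function name is illustrative, not mindyolo's code):

```python
def check_stack_inputs(values):
    """Raise the same kind of error 'Stack' reports when the inputs
    do not all share a single type; return True when they do."""
    kinds = {type(v).__name__ for v in values}
    if len(kinds) != 1:
        raise TypeError(f"all stack inputs must share one type, got {sorted(kinds)}")
    return True
```

For the failing line (`ops.stack((loss, lbox, lobj, lcls))` in yolov5_loss.py), the fix direction would be to convert whichever term is a bare scalar into a tensor of the same dtype before stacking.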
I only modified the code as described in point 2 of my original question: self.imgIds = [index for index in range(len(self.img_files))]
Nothing else was changed. Which version of MindSpore should be installed to train on Kunpeng 920 and Atlas 300V Pro? Are there any other caveats or reference materials? The server vendor is not sure how MindSpore should be used; I have asked on the relevant forums but received no reply yet. Looking forward to a reply from the mindyolo team, thanks!
You can try installing the matching version following the MindSpore website https://www.mindspore.cn/install, then run the following code to confirm that MindSpore installed successfully:
import mindspore as ms
ms.run_check()
I installed the MindSpore Ascend 310 build as instructed and ran:
import mindspore as ms
ms.run_check()
It prints the following:
MindSpore version: 2.1.1
The result of multiplication calculation is correct, MindSpore has been installed on platform [CPU] successfully!
Can Kunpeng 920 and Atlas 300V Pro only train on CPU? Can't "Ascend" be used instead of "CPU"?
I installed as the website requires, the latest version 2.1.1, but running run_check() now shows the following:
MindSpore version: 2.1.1
The result of multiplication calculation is correct, MindSpore has been installed on platform [Ascend] successfully!
[ERROR] KERNEL(40354,fffdd6e1f1e0,python):2023-10-13-09:30:47.546.313 [mindspore/ccsrc/kernel/oplib/op_info_utils.cc:172] LoadOpInfoJson] Get op info json suffix path failed, soc_version: Ascend310P3
[ERROR] KERNEL(40354,fffdd6e1f1e0,python):2023-10-13-09:30:47.546.421 [mindspore/ccsrc/kernel/oplib/op_info_utils.cc:111] GenerateOpInfos] Load op info json failed, version: Ascend310P3
Running mindyolo:
python demo/predict.py --config ./configs/yolov5/yolov5s.yaml --weight ./yolov5s.ckpt --image_path ./bus.jpg
it throws the following error:
2023-10-13 09:26:25,217 [WARNING] Parse Model, args: nearest, keep str type
2023-10-13 09:26:25,266 [WARNING] Parse Model, args: nearest, keep str type
2023-10-13 09:26:25,412 [INFO] number of network params, total: 7.254436M, trainable: 7.235389M
[ERROR] KERNEL(40106,fffd2d4e4340,python):2023-10-13-09:26:25.462.153 [mindspore/ccsrc/kernel/oplib/op_info_utils.cc:172] LoadOpInfoJson] Get op info json suffix path failed, soc_version: Ascend310P3
[ERROR] KERNEL(40106,fffd2d4e4340,python):2023-10-13-09:26:25.462.263 [mindspore/ccsrc/kernel/oplib/op_info_utils.cc:111] GenerateOpInfos] Load op info json failed, version: Ascend310P3
Traceback (most recent call last):
File "demo/predict.py", line 333, in
infer(args)
File "demo/predict.py", line 280, in infer
checkpoint_path=args.weight,
File "/home/mindyolo/mindyolo/mindyolo/models/model_factory.py", line 31, in create_model
model = create_fn(**model_args, **kwargs)
File "/home/mindyolo/mindyolo/mindyolo/models/yolov5.py", line 47, in yolov5
model = YOLOv5(cfg=cfg, in_channels=in_channels, num_classes=num_classes, **kwargs)
File "/home/mindyolo/mindyolo/mindyolo/models/yolov5.py", line 32, in init
self.initialize_weights()
File "/home/mindyolo/mindyolo/mindyolo/models/yolov5.py", line 41, in initialize_weights
m.initialize_biases()
File "/home/mindyolo/mindyolo/mindyolo/models/heads/yolov5_head.py", line 67, in initialize_biases
for mi, s in zip(m.m, m.stride): # from
File "/root/miniconda3/envs/ms2.1/lib/python3.7/site-packages/mindspore/common/tensor.py", line 456, in getitem
out = tensor_operator_registry.get('getitem')(self, index)
File "/root/miniconda3/envs/ms2.1/lib/python3.7/site-packages/mindspore/ops/composite/multitype_ops/_compile_utils.py", line 183, in _tensor_getitem
return data_update(tensor_update_types, tensor_update_args, self, new_index)
File "/root/miniconda3/envs/ms2.1/lib/python3.7/site-packages/mindspore/ops/composite/multitype_ops/_compile_utils.py", line 93, in data_update
data = data_update_by_ops(transfer_type, arg, data, new_index, value)
File "/root/miniconda3/envs/ms2.1/lib/python3.7/site-packages/mindspore/ops/composite/multitype_ops/_compile_utils.py", line 125, in data_update_by_ops
mask_index[0], mask_index[1], 0, 0, mask_index[2])
File "/root/miniconda3/envs/ms2.1/lib/python3.7/site-packages/mindspore/ops/composite/multitype_ops/compile_utils.py", line 52, in strided_slice
return strided_slice(data, begin_strides, end_strides, step_strides)
File "/root/miniconda3/envs/ms2.1/lib/python3.7/site-packages/mindspore/ops/primitive.py", line 314, in call
return _run_op(self, self.name, args)
File "/root/miniconda3/envs/ms2.1/lib/python3.7/site-packages/mindspore/ops/primitive.py", line 907, in _run_op
stub = _pynative_executor.run_op_async(obj, op_name, args)
File "/root/miniconda3/envs/ms2.1/lib/python3.7/site-packages/mindspore/common/api.py", line 1275, in run_op_async
return self._executor.run_op_async(*args)
RuntimeError: Load op info form json config failed, version: Ascend310P3
C++ Call Stack: (For framework developers)
mindspore/ccsrc/plugin/device/ascend/hal/device/ascend_kernel_runtime.cc:431 Init
Is this a mismatch between MindSpore and the Ascend 310P3?
The current code in this repository does not yet support training on Ascend 310.
I am currently renting a ModelArts server for training; the chip is an Ascend 910B. Running mindspore.run_check() returns:
MindSpore version: 2.0.0
The result of multiplication calculation is correct, MindSpore has been installed on platform [Ascend] successfully!
The device is usable, but when executing:
python train.py --config ./configs/yolov5/yolov5s.yaml --device_target Ascend
it reports the following error:
Traceback (most recent call last): File "train.py", line 321, in <module> train(args) File "train.py", line 293, in train profiler_step_num=args.profiler_step_num File "/home/ma-user/work/mindyolo/mindyolo/utils/trainer_factory.py", line 171, in train cur_step=cur_step,cur_epoch=cur_epoch) File "/home/ma-user/work/mindyolo/mindyolo/utils/trainer_factory.py", line 366, in train_step loss, loss_item, _, grads_finite = self.train_step_fn(imgs, labels, True) File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.7/site-packages/mindspore/common/api.py", line 610, in staging_specialize out = _MindsporeFunctionExecutor(func, hash_obj, input_signature, process_obj, jit_config)(*args, **kwargs) File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.7/site-packages/mindspore/common/api.py", line 102, in wrapper results = fn(*arg, **kwargs) File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.7/site-packages/mindspore/common/api.py", line 332, in __call__ raise err File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.7/site-packages/mindspore/common/api.py", line 329, in __call__ phase = self.compile(self.fn.__name__, *args_list, **kwargs) File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.7/site-packages/mindspore/common/api.py", line 406, in compile is_compile = self._graph_executor.compile(self.fn, compile_args, kwargs, phase, True) File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.7/site-packages/mindspore/ops/primitive.py", line 819, in __infer__ return {'dtype': None, 'shape': None, 'value': fn(*value_args)} File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.7/site-packages/mindspore/ops/composite/multitype_ops/_constexpr_utils.py", line 405, in slice2indices [grid.size if j == t else 1 for t in range(ndim)]))) File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.7/site-packages/mindspore/ops/composite/multitype_ops/_constexpr_utils.py", line 405, in <listcomp> [grid.size if j == t else 1 for t in range(ndim)]))) File 
"/home/ma-user/anaconda3/envs/MindSpore/lib/python3.7/site-packages/mindspore/common/_stub_tensor.py", line 100, in size shape = self.shape File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.7/site-packages/mindspore/common/_stub_tensor.py", line 84, in shape self.stub_shape = self.stub.get_shape() RuntimeError: Sync stream error!
together with the following message:
Ascend Error Message:
E39999: Inner Error!
E39999 An exception occurred during AICPU execution, stream_id:7, task_id:633, errcode:11002, msg:open so failed[FUNC:ProcessAicpuErrorInfo][FILE:device_error_proc.cc][LINE:702]
TraceBack (most recent call last):
Kernel task happen error, retCode=0x2a, [aicpu exception].[FUNC:PreCheckTaskErr][FILE:task.cc][LINE:1163]
Aicpu kernel execute failed, device_id=0, stream_id=7, task_id=633, errorCode=2a.[FUNC:PrintAicpuErrorInfo][FILE:task.cc][LINE:862]
Aicpu kernel execute failed, device_id=0, stream_id=7, task_id=633, fault op_name=[FUNC:GetError][FILE:stream.cc][LINE:1133]
rtStreamSynchronize execute failed, reason=[aicpu exception][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:49]
(Please search "Ascend Error Message" at https://www.mindspore.cn for error code description)
C++ Call Stack: (For framework developers)
mindspore/ccsrc/plugin/device/ascend/hal/device/ascend_device_address.cc:188 SyncStream
What is the cause of this error? Is it a problem with the cloud service platform?
From the error it looks like the problem occurs during graph compilation. mindyolo has not yet been adapted to 910B machines. Also, the MindSpore version you printed is 2.0, which likely has not been adapted to the 910B either; see the official website for the exact version compatibility.
I built a custom vehicle-detection training dataset in YOLO format. It trains without problems on the yolov5-6.2 framework, but training with mindyolo today hit some issues. Local platform: Ubuntu 22.04, an RTX 3080 Ti GPU, CUDA 12.0; I pulled the CUDA 11.6 MindSpore image via docker, verified that the GPU build installed successfully, and then installed mindyolo.
Training command:
python train.py --config ./config/yolov5/yolov5s.yaml --device_target GPU
1. ValueError: invalid literal for int() with base 10: "xxxxxx"
Error location: /mindyolo/mindyolo/data/dataset.py, line 198
self.imgIds = [int(Path(im_file).stem) for im_file in self.img_files]
Since this is a custom dataset, the images are not named as <int value>.jpg, so I changed the line to
self.imgIds = [index for index in range(len(self.img_files))]
I am not sure whether this change affects anything downstream.
2. Warnings about is_slice, for example:
[WARNING] ME(3227:140445071247168,MainProcess):2023-09-27-07:13:00.606.995 [mindspore/ops/primitive.py:814] The "use_copy_slice" is a constexpr function. The input arguments must be all constant value.
[WARNING] ME(3227:140445071247168,MainProcess):2023-09-27-07:13:00.644.340 [mindspore/ops/primitive.py:814] The "is_slice" is a constexpr function. The input arguments must be all constant value.
(... the same warnings repeat many more times ...)
The cause of these warnings is not yet clear.
3. The training loss becomes nan:
2023-09-27 09:05:09,329 [INFO] Epoch 6/300, Step 200/510, step time: 1095.86 ms
2023-09-27 09:05:20,349 [WARNING] overflow, still update, loss scale adjust to 1024.0
2023-09-27 09:05:20,355 [INFO] Epoch 6/300, Step 210/510, imgsize (640, 640), loss: nan, lbox: nan, lobj: nan, lcls: nan, cur_lr: 0.0009835000382736325
This did not happen when training with the official yolov5, and the cause is unknown. train.py sets clip_grad=True.
For the first problem, is it OK to just modify the code this way? @zhanghuiyao
What is "the first problem"? @lonngxiang
python train.py --config ./config/yolov5/yolov5s.yaml --device_target GPU
1.ValueError: invalid literal for int() with base 10:"xxxxxx"
Error location: /mindyolo/mindyolo/data/dataset.py, line 198
self.imgIds = [int(Path(im_file).stem) for im_file in self.img_files]
Since this is a custom dataset, the images are not named as <int value>.jpg, so I changed the line to
self.imgIds = [index for index in range(len(self.img_files))]
I am not sure whether this change affects anything downstream.
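A middle-ground variant of that modification, assuming imgIds only needs to be a list of unique ints for COCO-style evaluation, keeps numeric filename stems where they exist and falls back to the enumeration index otherwise (illustrative, not mindyolo's actual code):

```python
from pathlib import Path

def build_img_ids(img_files):
    """Use the numeric filename stem as the image id when possible,
    otherwise fall back to the enumeration index."""
    return [int(Path(f).stem) if Path(f).stem.isdigit() else i
            for i, f in enumerate(img_files)]
```

This way COCO-named datasets keep their original ids while arbitrarily named custom datasets still load.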
The impact of this change should be confined to the dataset code.
| gharchive/issue | 2023-09-27T09:10:54 | 2025-04-01T06:45:00.765539 | {
"authors": [
"SoulProficiency",
"lonngxiang",
"zhanghuiyao"
],
"repo": "mindspore-lab/mindyolo",
"url": "https://github.com/mindspore-lab/mindyolo/issues/217",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
87087250 | Long usernames
Our CAS server is returning names longer than the 30 characters in some instances (our source auth is kerberos which allows 256 chars), as a result we get stack traces when using django-cas-ng of this sort:
Traceback (most recent call last):
File "/app/.heroku/python/lib/python2.7/site-packages/django/core/handlers/base.py", line 132, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/app/.heroku/python/lib/python2.7/site-packages/django_cas_ng/views.py", line 103, in login
user = authenticate(ticket=ticket, service=service, request=request)
File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/auth/__init__.py", line 74, in authenticate
user = backend.authenticate(**credentials)
File "/app/.heroku/python/lib/python2.7/site-packages/django_cas_ng/backends.py", line 256, in authenticate
user = User.objects.create_user(username, '')
File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/auth/models.py", line 187, in create_user
**extra_fields)
File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/auth/models.py", line 182, in _create_user
user.save(using=self._db)
File "/app/.heroku/python/lib/python2.7/site-packages/django/db/models/base.py", line 710, in save
force_update=force_update, update_fields=update_fields)
File "/app/.heroku/python/lib/python2.7/site-packages/django/db/models/base.py", line 738, in save_base
updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields)
File "/app/.heroku/python/lib/python2.7/site-packages/django/db/models/base.py", line 822, in _save_table
result = self._do_insert(cls._base_manager, using, fields, update_pk, raw)
File "/app/.heroku/python/lib/python2.7/site-packages/django/db/models/base.py", line 861, in _do_insert
using=using, raw=raw)
File "/app/.heroku/python/lib/python2.7/site-packages/django/db/models/manager.py", line 127, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/app/.heroku/python/lib/python2.7/site-packages/django/db/models/query.py", line 920, in _insert
return query.get_compiler(using=using).execute_sql(return_id)
File "/app/.heroku/python/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 974, in execute_sql
cursor.execute(sql, params)
File "/app/.heroku/python/lib/python2.7/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/app/.heroku/python/lib/python2.7/site-packages/django/db/utils.py", line 97, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/app/.heroku/python/lib/python2.7/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
DataError: value too long for type character varying(30)
Would you be interested in a patch to do some automated truncation of some sort in this project, or am I better off trying a custom user model, or patching the user.username field?
User.username belongs to Django; we should not modify it.
It's better to use a custom User model, since usernames need to stay unique. Truncation could produce conflicts.
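The uniqueness objection can be shown with a toy truncation: two distinct long principals can collapse to the same 30-character value (the names below are made up for illustration).

```python
def truncate_username(name, limit=30):
    """Naive truncation as proposed in the issue; distinct long
    names can collide after truncation, breaking uniqueness."""
    return name[:limit]

alice = "averylongkerberosprincipalname.alice"
bob = "averylongkerberosprincipalname.bob"
# both exceed 30 characters and truncate to the identical string
```

A custom user model with a wider username field avoids the collision entirely.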
Cool, thanks for the consideration
| gharchive/issue | 2015-06-10T19:57:49 | 2025-04-01T06:45:00.839191 | {
"authors": [
"carsongee",
"mingchen"
],
"repo": "mingchen/django-cas-ng",
"url": "https://github.com/mingchen/django-cas-ng/issues/44",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1510206120 | 🛑 MineVN Website is down
In b6d8952, MineVN Website (http://minevn.net) was down:
HTTP code: 0
Response time: 0 ms
Resolved: MineVN Website is back up in 8e45c68.
| gharchive/issue | 2022-12-24T23:09:40 | 2025-04-01T06:45:00.847612 | {
"authors": [
"minhh2792"
],
"repo": "minhh2792/uptime",
"url": "https://github.com/minhh2792/uptime/issues/886",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
135980884 | Please add the possibility to save one collection as multiple documents
Hi,
your wish for feedback is my command :)
Please add the possibility to save one collection as multible documents.
What does it mean and why do I need it.
At the moment you lib saves one collection let's say:
var test = PersistentList< Report >.Create(new SaveOnlyWhenRequested());
test.Add(new Models.Report { Name = "Test 1" });
test.Add(new Models.Report { Name = "Test 2" });
test.Save();
to "Report.jss".
This is OK, but it's very bad for collaboration on Git, SVN or something.
If I just change one Report, the whole collection has to be merged.
If now 10 people change multiple reports and all check in, the merging can get very hard!
If you could add the possibility to say:
Save all objects of type "Report" into directory "Reports" with some generated name (easiest: a GUID; in the future maybe the Id/Key property of the class)
That would be awesome!
Kind regards
Christian
I got you.
But why are you uploading user data to github?
In my case it is not github, it is a company internal SVN Repo.
Can you implement this? Then I'm 100% satisfied :)
| gharchive/issue | 2016-02-24T07:24:27 | 2025-04-01T06:45:00.851526 | {
"authors": [
"SharpNoiZy",
"andrecarlucci"
],
"repo": "mini-biggy/mini-biggy",
"url": "https://github.com/mini-biggy/mini-biggy/issues/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
108480207 | [FR] Edit / delete groups
Great job!
Miniflux is my favourite anti-procrastination tool, been using it on and off for a while (since I discovered KB) !
Makes daily routines on the internet much more efficient.
Recently updated the Miniflux version on my server.
The groups-function (for subscriptions) is nice - but I made the mistake of entering a typo when establishing a group - and in another group I changed my mind and deleted all the feeds for that group, so now the group still exists with no feeds to it.
I can't find a way to delete or edit groups from the user interface
(realize I could attack the .sqlite db and edit there - but as an FR, perhaps it should be possible to edit or delete groups from the UI)
I can't find a way to delete groups from the user interface
Empty groups should be removed automatically, as soon as you remove the group association from the last feed that is a member of the group. If you unsubscribe the last feed which is a member of a group, you need to assign/remove a group on any other feed to trigger the cleanup mechanism.
Would you please verify that it's working this way?
I can't find a way to edit groups from the user interface
As co-author of the group feature, I have to say it's intentional. The idea was to keep this feature as small as possible — which means no UI for tasks that, in my opinion, are rarely used. You can still create a new group, add your feeds to this new group, and remove them from the old group.
@mkresin
Thank you for explaining the intended function of the groups.
*I like the simplicity.
Did the following:
Reviewed all my feeds, to verify that none of them are member of the "ghost" group (that contains no feeds)
Established a new feed and made it member of the unwanted "ghost" group
Removed the new feed
"Ghost" group is still alive and well, with no active feeds
Well, I can not reproduce the issue. My "ghost" group gets removed.
Would it be possible to get a copy of your database?
@mkresin
Now I am unable to reproduce it myself, tested twice - works perfect, empty group disappears.
I'm closing this issue and taking back the suggested FR since I support the thought and simplicity that is built into the original design of the groups-feature.
| gharchive/issue | 2015-09-26T17:00:16 | 2025-04-01T06:45:00.859444 | {
"authors": [
"mkresin",
"sparkles645"
],
"repo": "miniflux/miniflux",
"url": "https://github.com/miniflux/miniflux/issues/414",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
253773988 | Modifying the Java build file to handle newly added MintLogger.java
Modifying the Java build file to handle the newly added MintLogger.java file; related to #597 (update minio-java to output the new mint log format)
Should be merged after minio-java release 3.0.7
@kannappanr if this PR is not applicable, then close it
Closing this PR as https://github.com/minio/mint/pull/128 will handle the changes intended by this PR.
| gharchive/pull-request | 2017-08-29T19:18:44 | 2025-04-01T06:45:01.000150 | {
"authors": [
"NitishT",
"balamurugana",
"kannappanr"
],
"repo": "minio/mint",
"url": "https://github.com/minio/mint/pull/124",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2190941428 | minimize: turn functions using FnCtxt into methods
Much simpler than I had first anticipated.
This PR turns all minimize functions that use FnCtxt into methods instead.
I'm unsure how to handle the imports now (use bb::*). So far I deleted them to compile without warnings, but now the mod bb stands around unused.
closes #159
That... should have been extremely unlikely...
Let's try again.^^
| gharchive/pull-request | 2024-03-17T23:43:13 | 2025-04-01T06:45:01.015365 | {
"authors": [
"RalfJung",
"bifbof"
],
"repo": "minirust/minirust",
"url": "https://github.com/minirust/minirust/pull/165",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
249370091 | Cors
Adds cors to the platform
Done wrong
| gharchive/pull-request | 2017-08-10T14:41:38 | 2025-04-01T06:45:01.016143 | {
"authors": [
"jorgemoralespou"
],
"repo": "minishift/minishift-addons",
"url": "https://github.com/minishift/minishift-addons/pull/7",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2618458589 | ✨ Build RStudio image
User Story
As an Analytical Platform engineer
I want to release one single image of RStudio that covers functionality 🤷 <to be defined>
So that users have one RStudio release to use and we have one RStudio release to support
Value / Purpose
We want to offer a minimal RStudio image, based on the Analytical Platform Cloud Development Environment Base Image, for users of the Analytical Platform.
Useful Contacts
@jacobwoffenden @Gary-H9
User Types
Analytical Platform Users
Proposal
Build a RStudio offering from scratch like we do for Visual Studio Code, allowing us to understand and control the entire offering.
Subject to user research this image may need other functionality.
Additional Information
I think this task can be broken down into two parts:
a) Install R - as per here. - already done in the APCBDE image here. ✅
b) Install RStudio - as per here.
Information from the old repository should also be considered as part of this work, as (presumably) challenges met there were overcome; we should try to avoid duplicating previous efforts.
RStudio doesn't seem to do releases as we'd expect and rather releases tarballs based on tags (I think?).
Definition of Done
[ ] Create a restricted release within prod environment for us to test
[ ] Private beta?
[ ] Make the release generally available
[ ] Adopt deprecation plan which is in place for JupyterLab
[ ] Send comms to users about the upgrade
Moving to blocked while we figure out a rollout plan
Moving to backlog, will be picked in 2025
| gharchive/issue | 2024-10-28T13:39:29 | 2025-04-01T06:45:01.031924 | {
"authors": [
"Gary-H9",
"jacobwoffenden"
],
"repo": "ministryofjustice/analytical-platform",
"url": "https://github.com/ministryofjustice/analytical-platform/issues/5861",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
956612542 | Testable 1736 styling edit branch page
Some basic HTML, CSS, and JS structure for what's currently in the page.
Expect this to change and evolve over time, but it's a starting point.
@stevenburnell-moj is there a reason to use data-index-conditional over data-conditional-index?
| gharchive/pull-request | 2021-07-30T10:19:04 | 2025-04-01T06:45:01.037951 | {
"authors": [
"brenetic",
"stevenburnell-moj"
],
"repo": "ministryofjustice/fb-editor",
"url": "https://github.com/ministryofjustice/fb-editor/pull/608",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2006756415 | Include CopyObject in our custom cloudtrail
We want to have a record of all writes into our s3 buckets. This allows us to monitor flow of data through the conformance zones.
Currently we log PutObject, but in the landing-to-raw lambda we copy files from landing to the data bucket, using a raw/ or fail/ prefix.
I think for these events to be logged, we need to include the CopyObject event name.
This is still not working as expected:
matt.moore@MJ004284 putObjectLogs % jq '.Records[] | .requestParameters.key' < 013433889002_CloudTrail_eu-west-2_20231122T1705Z_G0FcID8VlmfFRz48.json
"landing/example_prison_data_product/v1/testing/load_timestamp=20231122T170313Z/d7169be7-7a40-44c2-b83d-7ab2c8d26e30.csv"
"curated/example_prison_data_product/v1/testing/extraction_timestamp=20231122T170313Z/20231122_170326_00012_vg7ew_c0d448a1-9b08-42a4-b18f-98f44fe33b77"
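As a quick sanity check, the jq query above can be mirrored in a few lines of Python that filter a CloudTrail log for the write events we care about (PutObject and CopyObject). This is an illustrative sketch — the helper name and the sample records are made up, and it assumes the standard CloudTrail record layout:

```python
import json

# Event names we want logged for writes into the buckets.
WRITE_EVENTS = {"PutObject", "CopyObject"}

def object_keys(trail_log: dict) -> list[str]:
    """Return the S3 object keys of PutObject/CopyObject records."""
    return [
        record["requestParameters"]["key"]
        for record in trail_log.get("Records", [])
        if record.get("eventName") in WRITE_EVENTS
    ]

# Hypothetical sample log for illustration.
log = {
    "Records": [
        {"eventName": "PutObject",
         "requestParameters": {"key": "landing/example/file.csv"}},
        {"eventName": "CopyObject",
         "requestParameters": {"key": "raw/example/file.csv"}},
        {"eventName": "GetObject",
         "requestParameters": {"key": "curated/example/file.csv"}},
    ]
}
print(object_keys(log))  # → ['landing/example/file.csv', 'raw/example/file.csv']
```

If CopyObject is wired up correctly, the landing-to-raw copies should show up here alongside the PutObject keys.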
| gharchive/pull-request | 2023-11-22T16:58:11 | 2025-04-01T06:45:01.039488 | {
"authors": [
"MatMoore"
],
"repo": "ministryofjustice/modernisation-platform-environments",
"url": "https://github.com/ministryofjustice/modernisation-platform-environments/pull/4122",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
150310789 | 284 validate context
@marcincichon @RobertWDLowe Please can you review?
Summary of changes
Normalize output of all context-related methods to return slugs as identifiers for agencies. Previously the method Agency_Context::current_user_available_agencies() was returning an array of WP_Term objects. Also update uses of this method to expect the new return type.
Agency_Context::set_agency_context($agency) now checks that the current user is allowed to change to the specified $agency. If not, the method will return a WP_Error object.
Handle the WP_Error object and show the error message to the user if they're trying to change to a context they don't have permission for.
Agency_Context::get_agency_context($agency) will now check that the context stored for the current user is still valid – i.e. if they're still allowed access to the agency. If not, it'll set it to one that they do have access to before returning.
Move Agency_Context and Agency_Editor classes into new directory inc/utilities.
@ollietreend tested. Looks good.
The code looks good, @ollietreend
| gharchive/pull-request | 2016-04-22T08:50:09 | 2025-04-01T06:45:01.046167 | {
"authors": [
"RobertWDLowe",
"marcincichon",
"ollietreend"
],
"repo": "ministryofjustice/mojintranet-theme",
"url": "https://github.com/ministryofjustice/mojintranet-theme/pull/314",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1643734778 | LPAL-1184 Use PHP 8.1 for all composer updates
Purpose
LPAL-1184
Approach
Force PHP 8.1 in an attempt to make Renovate stop using incorrect PHP versions when updating composer.json.
Learning
https://github.com/renovatebot/renovate/issues/2355#issuecomment-1452830039
Checklist
[X] I have performed a self-review of my own code
[X] I have updated documentation (Confluence/GitHub wiki/tech debt doc) where relevant
[X] I have added tests to prove my work
[X] I have added mandatory tags to terraformed resources, where possible
[X] If I have a new OPG component dependency, I have updated the metadata.json with the repo location.
[X] If I added a package.json or composer.json, I also made sure this is included in the script in .github/workflows/dependabot-update.yml
[ ] The product team have tested these changes
config.platform.php specifies the exact version of PHP to use, which should prevent psalm from using the wrong PHP version. The range in require.php should still allow Renovate to use PHP 8.1 to run its composer updates.
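For reference, the rough shape of those two settings side by side in composer.json (the version numbers here are illustrative, not copied from the repo):

```json
{
    "require": {
        "php": "^8.1"
    },
    "config": {
        "platform": {
            "php": "8.1.0"
        }
    }
}
```

require.php is the accepted range; config.platform.php pins what Composer resolves against regardless of the PHP actually running it.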
| gharchive/pull-request | 2023-03-28T11:22:07 | 2025-04-01T06:45:01.052007 | {
"authors": [
"townxelliot"
],
"repo": "ministryofjustice/opg-lpa",
"url": "https://github.com/ministryofjustice/opg-lpa/pull/1457",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1125795026 | github codeowners
Update for repository standards:
This option affects a pull request, i.e. a PR will need to be reviewed and approved by a CODEOWNER before it can be merged.
Draft: Pending chat with CoP
| gharchive/pull-request | 2022-02-07T10:27:37 | 2025-04-01T06:45:01.053194 | {
"authors": [
"philip-milne"
],
"repo": "ministryofjustice/prisoner-contact-registry",
"url": "https://github.com/ministryofjustice/prisoner-contact-registry/pull/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1489521813 | 🛑 Ralsei is down
In 01142be, Ralsei (ralsei.nightcore.monster) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Ralsei is back up in 87beceb.
| gharchive/issue | 2022-12-11T06:15:17 | 2025-04-01T06:45:01.089778 | {
"authors": [
"lunaisnotaboy"
],
"repo": "mint-lgbt/status",
"url": "https://github.com/mint-lgbt/status/issues/131",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2436732326 | 🛑 Matrix is down
In 16ee687, Matrix (https://matrix.mint.lgbt:8448/_matrix/federation/v1/version) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Matrix is back up in 148736b after 26 minutes.
| gharchive/issue | 2024-07-30T02:17:16 | 2025-04-01T06:45:01.092222 | {
"authors": [
"lunaisnotaboy"
],
"repo": "mint-lgbt/status",
"url": "https://github.com/mint-lgbt/status/issues/1706",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2523515176 | 🛑 Chibisafe is down
In a139333, Chibisafe (https://chibi.mint.lgbt/api/health) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Chibisafe is back up in 8a67a0c after 1 hour, 31 minutes.
| gharchive/issue | 2024-09-12T23:17:47 | 2025-04-01T06:45:01.094575 | {
"authors": [
"lunaisnotaboy"
],
"repo": "mint-lgbt/status",
"url": "https://github.com/mint-lgbt/status/issues/2656",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2686766361 | 🛑 Matrix is down
In c1ee185, Matrix (https://matrix.mint.lgbt:8448/_matrix/federation/v1/version) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Matrix is back up in 25f317b after 27 minutes.
| gharchive/issue | 2024-11-23T23:20:57 | 2025-04-01T06:45:01.096964 | {
"authors": [
"lunaisnotaboy"
],
"repo": "mint-lgbt/status",
"url": "https://github.com/mint-lgbt/status/issues/4026",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2700893476 | 🛑 Matrix is down
In 7a7546f, Matrix (https://matrix.mint.lgbt:8448/_matrix/federation/v1/version) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Matrix is back up in d16fa6c after 7 minutes.
| gharchive/issue | 2024-11-28T05:48:40 | 2025-04-01T06:45:01.099572 | {
"authors": [
"lunaisnotaboy"
],
"repo": "mint-lgbt/status",
"url": "https://github.com/mint-lgbt/status/issues/4147",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2702793616 | 🛑 Matrix is down
In 1488415, Matrix (https://matrix.mint.lgbt:8448/_matrix/federation/v1/version) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Matrix is back up in cde5a33 after 28 minutes.
| gharchive/issue | 2024-11-28T17:13:06 | 2025-04-01T06:45:01.101982 | {
"authors": [
"lunaisnotaboy"
],
"repo": "mint-lgbt/status",
"url": "https://github.com/mint-lgbt/status/issues/4161",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1146438648 | docker container fails with --extended-transactions
Describe the bug
When running mintapi from the provided docker image with --extended-transactions it fails.
What version of MintAPI are you using?
ghcr.io/mintapi/mintapi latest 0e29c480113c 3 weeks ago 1.56GB
What command[s] did you run / steps to reproduce?
Stacktrace/error received:
docker run --rm --shm-size=2g ghcr.io/mintapi/mintapi mintapi $username $password --headless --use-chromedriver-on-path --extended-transactions
Traceback (most recent call last):
  File "/home/seluser/.local/bin/mintapi", line 8, in <module>
    sys.exit(main())
  File "/home/seluser/.local/lib/python3.8/site-packages/mintapi/cli.py", line 507, in main
    data = mint.get_detailed_transactions(
  File "/home/seluser/.local/lib/python3.8/site-packages/mintapi/api.py", line 472, in get_detailed_transactions
    result = self.get_transactions_json(
  File "/home/seluser/.local/lib/python3.8/site-packages/mintapi/api.py", line 429, in get_transactions_json
    if __include_investments_with_transactions(id, include_investment):
NameError: name '_Mint__include_investments_with_transactions' is not defined
What did you expect to happen?
Expected extended transactions output... a bunch of json.
What actually happened?
Application errored out.
Additional context
N/A
If it helps, regular -t doesn't work either. With or without the start date.
Using the latest docker image and upgrading selenium changes the behaviour:
+ /home/seluser/.local/bin/mintapi $username $password --use-chromedriver-on-path --headless --extended-transactions --start-date 03/05/22
Traceback (most recent call last):
  File "/home/seluser/.local/bin/mintapi", line 8, in <module>
    sys.exit(main())
  File "/home/seluser/.local/lib/python3.8/site-packages/mintapi/cli.py", line 507, in main
    data = mint.get_detailed_transactions(
  File "/home/seluser/.local/lib/python3.8/site-packages/mintapi/api.py", line 483, in get_detailed_transactions
    df.amount = df.apply(reverse_credit_amount, axis=1)
  File "/home/seluser/.local/lib/python3.8/site-packages/pandas/core/generic.py", line 5612, in __setattr__
    self[name] = value
  File "/home/seluser/.local/lib/python3.8/site-packages/pandas/core/frame.py", line 3645, in __setitem__
    self._set_item_frame_value(key, value)
  File "/home/seluser/.local/lib/python3.8/site-packages/pandas/core/frame.py", line 3775, in _set_item_frame_value
    raise ValueError("Columns must be same length as key")
ValueError: Columns must be same length as key
But it works without specifying the start date. Seems like there is a problem with comparing the start date. Weird that this doesn't happen on my non-docker system.
I guess this should be closed because --extended-transactions is no longer supported.
Thanks! Let me know if https://github.com/mintapi/mintapi/pull/477 addresses any docs concerns.
| gharchive/issue | 2022-02-22T04:23:06 | 2025-04-01T06:45:01.107442 | {
"authors": [
"mrooney",
"timdau"
],
"repo": "mintapi/mintapi",
"url": "https://github.com/mintapi/mintapi/issues/404",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1315027284 | irmin-pack: fix placement of gc callback
The callback and elapsed time calculation need to occur after the unlink step. Thanks @Ngoguey42 for spotting.
Codecov Report
Merging #2010 (d0b8fdc) into main (5920899) will increase coverage by 0.00%.
The diff coverage is 100.00%.
:exclamation: Current head d0b8fdc differs from pull request most recent head 3b6d4ee. Consider uploading reports for the commit 3b6d4ee to get more accurate results
@@ Coverage Diff @@
## main #2010 +/- ##
=======================================
Coverage 64.53% 64.54%
=======================================
Files 129 129
Lines 15556 15556
=======================================
+ Hits 10039 10040 +1
+ Misses 5517 5516 -1
Impacted Files               Coverage Δ
src/irmin-pack/unix/ext.ml   66.86% <100.00%> (ø)
src/irmin/commit.ml          64.19% <0.00%> (+0.32%) ⬆️
:mega: Codecov can now indicate which changes are the most critical in Pull Requests. Learn more
Thanks!
| gharchive/pull-request | 2022-07-22T13:42:27 | 2025-04-01T06:45:01.120942 | {
"authors": [
"codecov-commenter",
"metanivek",
"samoht"
],
"repo": "mirage/irmin",
"url": "https://github.com/mirage/irmin/pull/2010",
"license": "isc",
"license_type": "permissive",
"license_source": "bigquery"
} |
541006405 | Invalid mutation name on account register
What I'm trying to achieve
Have the frontend send typo free data on customer register
Steps to reproduce the problem
Create an account in storefront
Look on graphql POST in dev console
What I expected to happen
Customer is sent, not Cutomer. Quite a few places to replace in the source code.
Screenshots
System information
Operating system: MacOS Catalina
Browser: Chrome
Now that I tested this I'm having the same issue on the StoreFront, I checked my Graphql server, but that mutation is not there.
@acidlake notice customerRegister inside the query which is the actual mutation that exists server-side. That is why it is working (or at least it works for me with newest storefront and saleor), the bug is just about the wrapper name that frontend sends, which has the typo (and is omitted server-side unlike the payload with proper mutation name)
| gharchive/issue | 2019-12-20T13:54:12 | 2025-04-01T06:45:01.152658 | {
"authors": [
"acidlake",
"tomaszszymanski129"
],
"repo": "mirumee/saleor-storefront",
"url": "https://github.com/mirumee/saleor-storefront/issues/542",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
308368886 | Rest Api
Good Day
What API can I use to add products programmatically? I need more info or a solution to do this.
Thank you
Hi @albertusgeyser! Right now we are working heavily on implementing GraphQL API - check issues/pulls with api tag. Adding products is not finished yet, but should be done soon, see #1945.
| gharchive/issue | 2018-03-25T17:35:05 | 2025-04-01T06:45:01.154283 | {
"authors": [
"akjanik",
"albertusgeyser"
],
"repo": "mirumee/saleor",
"url": "https://github.com/mirumee/saleor/issues/1974",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
280186152 | KMLExportThread not thread safe
Expected Behavior
Multiple copies of KMLExportThread will work normally
Actual Behavior
Inconsistent rendering of KML symbols as either Path or Polygon.
Steps to Reproduce the Problem
Run KMLExportTest
Specifications
Version: All
Platform:
Subsystem:
Issue was with unit test not KMLExportThread. The renderer returns both a polygon and a path and the test was only processing the object at index 0.
| gharchive/issue | 2017-12-07T16:08:43 | 2025-04-01T06:45:01.171195 | {
"authors": [
"rsupnekar"
],
"repo": "missioncommand/emp3-android",
"url": "https://github.com/missioncommand/emp3-android/issues/396",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1704921693 | Test accuracy of imagenet data set on MCU?
Great work! I was wondering how you tested the accuracy of the ImageNet data set on the MCU? Is it actually running on the MCU? How is the test data stored on the MCU? I would like to do a baseline, thank you very much.
Hi @ahdxwg,
We simulate the on-device inference with PyTorch (similar to how you do quantization-aware training) and measure the accuracy on servers.
| gharchive/issue | 2023-05-11T02:21:14 | 2025-04-01T06:45:01.189254 | {
"authors": [
"ahdxwg",
"meenchen"
],
"repo": "mit-han-lab/tinyengine",
"url": "https://github.com/mit-han-lab/tinyengine/issues/84",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1705228776 | Is it possible that key=z to 'Clear the List' only clears flows which are finished, and not ones which are in transit?
I like to clear the list often to see only specific flows, but often I accidentally wipe out flows which are in transit. The API call is then terminated and never passed to the client who initiated the API call.
So if my flow list shows 3 flows:
finished
in transit
finished
Hitting z (or maybe a custom command?) should then only show 1 flow afterwards:
in transit
Can this maybe be implemented via an addon?
Assuming you are using mitmproxy and not mitmweb, z is currently bound to view.flows.remove @all. You can probably add a custom keybinding with a different filter, see https://docs.mitmproxy.org/stable/concepts-commands/#custom-key-bindings and https://docs.mitmproxy.org/stable/concepts-filters/. :)
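A sketch of what such a rebinding could look like in ~/.mitmproxy/keys.yaml. Note the filter choice is my assumption: !~q ("not a request without a response", i.e. flows that already have a response) as a stand-in for "finished" — verify it against the filter docs linked above before relying on it:

```yaml
# ~/.mitmproxy/keys.yaml — hypothetical sketch, check against the docs.
# Rebind z in the flow list to remove only flows that already have a
# response, leaving in-transit flows (matched by ~q) untouched.
-
  key: z
  ctx: ["flowlist"]
  cmd: "view.flows.remove !~q"
```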
| gharchive/issue | 2023-05-11T07:25:23 | 2025-04-01T06:45:01.224254 | {
"authors": [
"Dima-369",
"mhils"
],
"repo": "mitmproxy/mitmproxy",
"url": "https://github.com/mitmproxy/mitmproxy/issues/6119",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2173895253 | Change program end date to certificate creation date
What are the relevant tickets?
Fix https://github.com/mitodl/hq/issues/3622
Description (What does it do?)
The end date for the program on the certificate is the date the certificate was created.
To test this:
Create a program certificate, go to view the certificate, make sure the the end date on the certificate is showing the date it was created.
Screenshots (if appropriate):
Functional testing passed. Just some documentation updates requested then this is good to go.
Does the program model have an end_date attribute?
No. It's going to be unique to each learner, based on when they completed their last required course.
Let me know if that makes sense or needs more background.
The get_end_date in the FullProgramSerializer has no value since the program never ends, it is ongoing.
Should we just remove get_end_date?
Why do you think the doc string for get_end_date is incorrect?
I guess I'm confused why end_date is in the FullProgramSerializer if it never has a value. Looking at the code for it, it seems like it could have a value as long as there are course_runs, with end_dates, that are associated with the Program.
The get_end_date in the FullProgramSerializer has no relation to certificates. Can we merge the PR?
The get_end_date in the FullProgramSerializer has no relation to certificates. Can we merge the PR?
Yeah sure. Sorry to bother.
| gharchive/pull-request | 2024-03-07T13:41:26 | 2025-04-01T06:45:01.247867 | {
"authors": [
"annagav",
"collinpreston",
"pdpinch"
],
"repo": "mitodl/mitxonline",
"url": "https://github.com/mitodl/mitxonline/pull/2121",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
927294367 | Fix warning: comparison of integer expressions of different signedness
json.c:1224:26: warning: comparison of integer expressions of different signedness: ‘int’ and ‘size_t’ {aka ‘long unsigned int’} [-Wsign-compare]
1224 | if(n >= 0 && n < capacity)
| ~~^~~~~~~~~~
json.c:1231:14: warning: comparison of integer expressions of different signedness: ‘int’ and ‘size_t’ {aka ‘long unsigned int’} [-Wsign-compare]
1231 | if(n >= capacity)
| ~~^~~~~~~~~~~
Merged, thanks.
| gharchive/pull-request | 2021-06-22T14:25:45 | 2025-04-01T06:45:01.292105 | {
"authors": [
"mity",
"viccpp"
],
"repo": "mity/centijson",
"url": "https://github.com/mity/centijson/pull/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2647071615 | library: New component [SuperSpinner]
update: [SuperDropdown] Use items to optimize performance
add: [SuperSpinner] New component with support for both dropdown and dialog modes, and support for complex items
Comments are still missing for now.
Why did you close it? I haven't reviewed it yet.
Why did you close it? I haven't reviewed it yet.
It should be fine now.
| gharchive/pull-request | 2024-11-10T10:11:47 | 2025-04-01T06:45:01.298261 | {
"authors": [
"HowieHChen",
"YuKongA"
],
"repo": "miuix-kotlin-multiplatform/miuix",
"url": "https://github.com/miuix-kotlin-multiplatform/miuix/pull/33",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
978079494 | Cannot write patterns after options
If you write the following, it will not work as expected.
eslint-interactive --ext .ts,.tsx,.vue src
This is equivalent to the behavior as if .ts,.tsx,.vue and src were passed to the ext option, according to the yargs specification (https://github.com/yargs/yargs/issues/1848).
eslint-interactive --ext .ts,.tsx,.vue --ext src
This confuses for the user because it is interpreted differently than eslint.
Workaround
You can avoid this problem by always writing the directory patterns first.
eslint-interactive src --ext .ts,.tsx,.vue
related works: https://github.com/tj/commander.js/blob/master/docs/options-taking-varying-arguments.md
| gharchive/issue | 2021-08-24T13:07:12 | 2025-04-01T06:45:01.347340 | {
"authors": [
"mizdra"
],
"repo": "mizdra/eslint-interactive",
"url": "https://github.com/mizdra/eslint-interactive/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2037158022 | Can anyone recommend some beginner tutorials?
I only know Python and don't want to use Gradio or Streamlit anymore. I'd like to use this project as a basis for writing some small web pages — where should I start learning?
Front-end basics (HTML, CSS, JS), Vue 3, and FastAPI — with those you're pretty much ready to start customizing the project.
| gharchive/issue | 2023-12-12T07:35:47 | 2025-04-01T06:45:01.348401 | {
"authors": [
"Mekako",
"mizhexiaoxiao"
],
"repo": "mizhexiaoxiao/vue-fastapi-admin",
"url": "https://github.com/mizhexiaoxiao/vue-fastapi-admin/issues/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2647525551 | exit show_help() properly
Hi,
The command mkjail getrelease -h outputs the help 2 times in a row.
That's because of a missing exit 0 at the end of the function called show_help().
Thank you.
Looks OK to me, @feld wrote the help, so I ask for his input.
| gharchive/pull-request | 2024-11-10T18:48:02 | 2025-04-01T06:45:01.395529 | {
"authors": [
"dlangille",
"okalm"
],
"repo": "mkjail/mkjail",
"url": "https://github.com/mkjail/mkjail/pull/53",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2537056843 | Add options for addSplatScene to set headers or other required request parameters
It would be great if the addSplatScene method also took in a request options parameter that would get passed to the eventual fetch request. This would allow for appropriate request parameters to be set such as headers for authentication purposes.
@mkkellogg I have created a PR for this change, I've been testing and using it on my forked repository with my own project and have had no issues. Let me know if you would like any changes!
https://github.com/mkkellogg/GaussianSplats3D/pull/354
Sorry for the late response on this, I've been super busy lately. Hopefully I can take a closer look at this soon.
| gharchive/issue | 2024-09-19T18:10:31 | 2025-04-01T06:45:01.397562 | {
"authors": [
"jesse-small",
"mkkellogg"
],
"repo": "mkkellogg/GaussianSplats3D",
"url": "https://github.com/mkkellogg/GaussianSplats3D/issues/337",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
382781164 | rebased
fixed another conflict as I merged the last commit and it was easy to resolve the conflict.
Once this is merged do you want to implement it in Prometheus and I will start a benchmark test to check if it works as expected.
btw if you can ping me on irc @prometheus-dev to speed things up if you want.
Ok, i'll merge and start testing too. https://github.com/prometheus/prometheus/pull/4230 has the changes needed from prometheus, though that build is going to fail without the changes here. Namely, it doesn't have the needed tsdb.Config fields that are defined here.
yeah that PR need to be buildable as the benchmarking builds from the actual code in the PR
Any idea how to make it buildable without putting all the tsdb changes in? It seems like in general they merge tsdb changes all in at once on a semi-regular basis.
Also, the test is breaking after merging. It seems that the time waiting for the block to persist isn't long enough. I tried increasing it and the test passes, but with the given time it doesn't work. Was there a reason for the length of time given? Is it ok to extend it? @krasi-georgiev
No reason for the time so can increase as needed.
For the benchmark we want to prove that these changes are fine to merge into tsdb; for this we need to pull all of them into Prometheus itself. This is the best way to make sure the tsdb changes will work exactly as planned in Prometheus.
I created a branch on tsdb https://github.com/prometheus/tsdb/tree/pull/343-orig
so you can use govendor to pull this into prometheus and open a PR with it so I can run the test.
from root dir
govendor fetch github.com/prometheus/tsdb/...@pull/343-orig
actually maybe something like this should work
GO111MODULE=on go get github.com/prometheus/tsdb@pull/343-orig
no need to put it anywhere just run this command in the root
the exact channel name is #prometheus-dev
| gharchive/pull-request | 2018-11-20T17:50:53 | 2025-04-01T06:45:01.410423 | {
"authors": [
"krasi-georgiev",
"mknapphrt"
],
"repo": "mknapphrt/tsdb",
"url": "https://github.com/mknapphrt/tsdb/pull/6",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1949485501 | Fix explorer build script for DockerHub
The release pipeline fails on building the explorer script. It was not properly updated for our switch to DockerHub.
Works now: https://hub.docker.com/r/fndnt/data_explorer
Good catch, updated and tested (the explorer one).
| gharchive/pull-request | 2023-10-18T11:11:03 | 2025-04-01T06:45:01.416594 | {
"authors": [
"RobbeSneyders"
],
"repo": "ml6team/fondant",
"url": "https://github.com/ml6team/fondant/pull/531",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1745269819 | Support for NextJs
I had some trouble running this with Next.js (nothing major, some libraries not working out of the box). Would there be interest in me contributing the code that I used to make it work?
You can see the changes needed to tvmjs here:
https://github.com/narangkay/mlc-ai-relax
and the changes to web-llm here:
https://github.com/narangkay/web-llm
Sorry, I somehow missed your reply. My repo is kind of super out of date, and I had hacked super deep into the tvmjs repo to make things work. However, I tried using the freshest version of both web-llm and tvmjs, and was able to make it work with two very minimal changes:
1. "ignoreDynamicRequires: true" in the CommonJS plugin for rollup.
2. A critical webpack config in the Next project itself, which was super painful to figure out but seems to be working :)
webpack: (config, { isServer }) => {
  // Fixes npm packages that depend on the `fs` module
  if (!isServer) {
    config.resolve.fallback = {
      // Keep the spread: without it, all the other fallback options
      // specified by Next.js would be dropped.
      ...config.resolve.fallback,
      fs: false,
      module: false,
      perf_hooks: false,
    };
  }
  return config;
},
I sent over a PR for the issue, PTAL. Looking forward to retiring the clones of web-llm and tvmjs I had to create to make this work :)
Thanks for the merge! What is the process for releasing to npm? I can volunteer to test the release in one of my apps to confirm it works :)
Let me do a new release soon
Just published a new release based on the latest change
Why did you close it? It still doesn't work; I spent a lot of time on this. If you build from source, the production build fails.
| gharchive/issue | 2023-06-07T07:43:20 | 2025-04-01T06:45:01.420689 | {
"authors": [
"djaffer",
"narangkay",
"tqchen"
],
"repo": "mlc-ai/web-llm",
"url": "https://github.com/mlc-ai/web-llm/issues/126",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1845252904 | RuntimeError: shape '[213568]' is invalid for input of size 262144 on criteo1tb
I'm running criteo1tb with the PyTorch baseline on a single GPU
$ CUDA_VISIBLE_DEVICES=5 python submission_runner.py --framework=pytorch --workload=criteo1tb --data_dir=$HOME/data/criteo --experiment_dir=$HOME/experiments --experiment_name=my_first_experiment --submission_path=reference_algorithms/development_algorithms/criteo1tb/criteo1tb_pytorch/submission.py --tuning_search_space=reference_algorithms/development_algorithms/criteo1tb/tuning_search_space.json --overwrite --torch_compile=0
with a reduced set of the input data
diff --git a/algorithmic_efficiency/workloads/criteo1tb/input_pipeline.py b/algorithmic_efficiency/workloads/criteo1tb/input_pipeline.py
index 17877d06..2f63adee 100644
--- a/algorithmic_efficiency/workloads/criteo1tb/input_pipeline.py
+++ b/algorithmic_efficiency/workloads/criteo1tb/input_pipeline.py
@@ -93,8 +93,9 @@ def get_criteo1tb_dataset(split: str,
repeat_final_dataset: bool = False):
"""Get the Criteo 1TB dataset for a given split."""
num_test_files = _NUM_DAY_23_FILES // 2 + 1
- if split in ['train', 'eval_train']:
- file_paths = [os.path.join(data_dir, f'day_{d}_*') for d in range(0, 23)]
+ if split in ['train', 'eval_train'] or True:
+ #file_paths = [os.path.join(data_dir, f'day_{d}_*') for d in range(0, 23)]
+ file_paths = [os.path.join(data_dir, f'day_0_000.csv')]
elif split == 'validation':
# Assumes files are of the format day_23_04.
file_paths = [
but it seems to fail (even without torch.compile) with
I0810 06:58:29.672421 139861006926848 submission_runner.py:259] Starting training loop.
I0810 06:58:41.968837 139861006926848 spec.py:320] Evaluating on the training split.
Traceback (most recent call last):
File "/data/users/ezyang/a/algorithmic-efficiency/submission_runner.py", line 604, in <module>
app.run(main)
File "/home/ezyang/local/a/pytorch-env/lib/python3.10/site-packages/absl_py-1.4.0-py3.10.egg/absl/app.py", line 308, in run
_run_main(main, args)
File "/home/ezyang/local/a/pytorch-env/lib/python3.10/site-packages/absl_py-1.4.0-py3.10.egg/absl/app.py", line 254, in _run_main
sys.exit(main(argv))
File "/data/users/ezyang/a/algorithmic-efficiency/submission_runner.py", line 575, in main
score = score_submission_on_workload(
File "/data/users/ezyang/a/algorithmic-efficiency/submission_runner.py", line 500, in score_submission_on_workload
timing, metrics = train_once(workload, global_batch_size,
File "/data/users/ezyang/a/algorithmic-efficiency/submission_runner.py", line 312, in train_once
latest_eval_result = workload.eval_model(global_eval_batch_size,
File "/data/users/ezyang/a/algorithmic-efficiency/algorithmic_efficiency/spec.py", line 321, in eval_model
train_metrics = self._eval_model_on_split(
File "/data/users/ezyang/a/algorithmic-efficiency/algorithmic_efficiency/workloads/criteo1tb/workload.py", line 133, in _eval_model_on_split
loss += self._eval_batch(params, eval_batch)
File "/data/users/ezyang/a/algorithmic-efficiency/algorithmic_efficiency/workloads/criteo1tb/criteo1tb_pytorch/workload.py", line 233, in _eval_batch
summed_loss = self.loss_fn(
File "/data/users/ezyang/a/algorithmic-efficiency/algorithmic_efficiency/workloads/criteo1tb/criteo1tb_pytorch/workload.py", line 57, in loss_fn
mask_batch = torch.reshape(mask_batch, (batch_size,))
RuntimeError: shape '[213568]' is invalid for input of size 262144
Try dividing this by 8 https://github.com/mlcommons/algorithmic-efficiency/blob/main/algorithmic_efficiency/workloads/criteo1tb/criteo1tb_pytorch/workload.py#L25
Their batch sizes assume 8 GPUs
Dividing it by 8 doesn't work:
RuntimeError: shape '[16960]' is invalid for input of size 32768
But it does seem plausible that the single-GPU math is wrong. Need to read some more...
I tried running with DDP and 8 GPUs and got RuntimeError: shape '[26696]' is invalid for input of size 32768. So maybe the problem is just hard-coded training set size that doesn't work with the smaller dataset.
(That was the problem)
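The numbers in the tracebacks are consistent with that diagnosis. A quick sanity check (a sketch; which side is the hard-coded eval-set size and which is the actual data size is an assumption drawn from the discussion above):

```python
# Sizes copied from the tracebacks above.
expected_examples = 213568   # presumably the hard-coded eval-set size
actual_elements = 262144     # what the reduced dataset actually yields
n_gpus = 8

# reshape fails because the two sizes simply disagree:
print(actual_elements == expected_examples)  # False

# The per-GPU numbers reproduce the 8-GPU DDP traceback exactly:
print(expected_examples // n_gpus)  # 26696 -> "shape '[26696]'"
print(actual_elements // n_gpus)    # 32768 -> "input of size 32768"
```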
So I struggled a bit trying to find the right configuration to make the WMT model run on a single V100. It'd be really nice for local debugging to have parameters scale with how many GPUs are available.
Will add clarifying instructions in the README FAQs on gotchas when running with a single GPU.
| gharchive/issue | 2023-08-10T13:59:23 | 2025-04-01T06:45:01.425888 | {
"authors": [
"ezyang",
"msaroufim",
"priyakasimbeg"
],
"repo": "mlcommons/algorithmic-efficiency",
"url": "https://github.com/mlcommons/algorithmic-efficiency/issues/475",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
158734171 | How to run?
Hi, sorry but I am new at this.
I tested the package by running "require("ml-naivebayes")" in the browser, and I got an error saying:
/app/available_modules/1465234699000/ml-naivebayes/node_modules/ml-matrix/src/matrix.js:6
class Matrix extends Array {
^^^^^
Unexpected reserved word
Is it an issue with your package, or how exactly should I require your package and run it?
Hello @sophieyoung717, are you running it in Tonic? If so, you should check that it's running on Node.js v4.4.5 or later.
Also if you prefer we can add an example in Tonic by default.
Thanks!
| gharchive/issue | 2016-06-06T17:44:02 | 2025-04-01T06:45:01.511663 | {
"authors": [
"maasencioh",
"sophieyoung717"
],
"repo": "mljs/naive-bayes",
"url": "https://github.com/mljs/naive-bayes/issues/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1711528327 | feat: add missing DnD-Context-Props to Plate via draggable-property
BREAKING: replace onDragEnd-property with more detailed draggable-property
BREAKING: delete well-data from FilledWell
See https://github.com/angular/angular/blob/main/CONTRIBUTING.md#-commit-message-format:
Breaking Change section should start with the phrase "BREAKING CHANGE: " followed by a summary of the breaking change, a blank line, and a detailed description of the breaking change that also includes migration instructions.
| gharchive/pull-request | 2023-05-16T08:34:31 | 2025-04-01T06:45:01.513689 | {
"authors": [
"simbig",
"spawnia"
],
"repo": "mll-lab/react-components",
"url": "https://github.com/mll-lab/react-components/pull/237",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
478119170 | [GNMT] Removed warmup run from load_samples_to_ram
@christ1ne and @briandersn, I removed this part from the code. It was a hack because the first query doesn't meet latency constraints, but it does not make much sense to have the SUT warmup inside this function.
I think it is still good to keep the warm up. Maybe move the code to https://github.com/mlperf/inference/blob/85d05e3e2dbdfa67cfed6d2af5ad4d5512ab337b/v0.5/translation/gnmt/tensorflow/loadgen_gnmt.py#L509 ?
Hi @christ1ne, it's a good idea to keep the warmup. However, we have to do it right after LoadSamplesToRam is called and right before queries get sent to the SUT. Unfortunately, both of these calls happen from within StartTest. This could be solved if we had a loadgen::warmup hook, so loadgen could make that call in between samples being loaded and queries being sent. @briandersn, do you think that sounds reasonable?
That sounds reasonable. It is one of the open tasks to add warmup hooks to the loadgen that: a) does what your loadgen::warmup would do + b) warms up the loadgen itself. It's marked low priority (2) right now, but we could bump it up.
looks good.
| gharchive/pull-request | 2019-08-07T20:15:13 | 2025-04-01T06:45:01.538529 | {
"authors": [
"briandersn",
"christ1ne",
"nvmbreughe"
],
"repo": "mlperf/inference",
"url": "https://github.com/mlperf/inference/pull/322",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
223770493 | App crashes on minimumDate
Setting a minimum date crashes the app:
<DateTimePicker
isVisible={this.state.isDateTimePickerVisibleTo}
onConfirm={this._handleDatePickedTo}
onCancel={this._hideDateTimePickerTo}
minimumDate={this.state.dateFrom}
maximumDate={new Date()}
/>
this.state.dateFrom comes from another date picker.
The code was working before; when I comment out minimumDate, everything works fine.
Hi!
What version of RN are you using? Does the bug happens on both iOS and Android?
It sounds more like a bug of react-native's own pickers, could you try to use them and see if they work correctly?
Thanks!
Hi,
I use react-native 0.43.3.
It crashes on both Android and iOS.
When I set datePickerModeAndroid to spinner, it works fine on Android.
Also, the Android DatePicker works fine with both calendar and spinner with minDate.
I haven't tried iOS yet since setting datePickerModeAndroid.
same here on 0.43.4
Does the following example work for you?
Here I'm setting minimumDate with the last picked date from the picker and it is working fine for me (iOS). 🤔
import React, { Component } from 'react';
import { Text, TouchableOpacity, View } from 'react-native';
import DateTimePicker from 'react-native-modal-datetime-picker';
import styles from './app.style';
export default class DateTimePickerTester extends Component {
state = {
isDateTimePickerVisible: false,
minimumDate: new Date(),
};
_showDateTimePicker = () => this.setState({ isDateTimePickerVisible: true });
_hideDateTimePicker = () => this.setState({ isDateTimePickerVisible: false });
_handleDatePicked = date => {
console.log('A date has been picked: ', date);
// Setting the minimumDate as the last picked date
this.setState({ minimumDate: date });
this._hideDateTimePicker();
};
render() {
return (
<View style={styles.container}>
<TouchableOpacity onPress={this._showDateTimePicker}>
<View style={styles.button}>
<Text>Show DatePicker</Text>
</View>
</TouchableOpacity>
<DateTimePicker
isVisible={this.state.isDateTimePickerVisible}
onConfirm={this._handleDatePicked}
onCancel={this._hideDateTimePicker}
minimumDate={this.state.minimumDate}
/>
</View>
);
}
}
Try with setting maximumDate with minimumDate
my code works on ios and android after i setted datePickerModeAndroid as spinner
Try with setting maximumDate with minimumDate
I did, it's working fine.
my code works on ios and android after i setted datePickerModeAndroid as spinner
That's weird, because datePickerModeAndroid is not even checked by iOS, maybe it was a caching issue?
My code still gives the error when I remove datePickerModeAndroid, on both Android and iOS, but I can't reproduce it in isolation. I tried your code and it works fine.
I also tried resetting minimumDate dynamically and that works fine too.
So thank you a lot; you can close this issue if you want.
Feel free to re-open if you'll be able to reproduce it!
Hello @arabscimitar, actually I am using this module.
My versions:
"react": "16.0.0-alpha.12",
"react-native": "0.46.1"
But you are saying to change react-native to version 0.43.3. However, I am using another module, Urban Airship, which only accepts react-native >= 0.44.0.
What can I do? I want to show a spinner:
datePickerModeAndroid={'spinner'}
I could not reproduce it; I am not sure why it happens.
But I want spinner mode on Android and it shows the default mode.
Need to reopen this issue as it still persists.
Having the same issue, only on Android with mode "date".
If you try to add the minimumDate prop with mode "date" on Android, the app crashes.
It works fine on iOS, and also with other modes like datetime and time.
This only happens if we pass the minimumDate prop in date mode (Android).
"react-native-modal-datetime-picker": "^9.0.0",
"react-native": "0.63.3",
| gharchive/issue | 2017-04-24T10:17:50 | 2025-04-01T06:45:01.568631 | {
"authors": [
"arabscimitar",
"lavarajallu",
"mmazzarolo",
"mubashiralisiddiqui",
"stephanedemotte"
],
"repo": "mmazzarolo/react-native-modal-datetime-picker",
"url": "https://github.com/mmazzarolo/react-native-modal-datetime-picker/issues/32",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
141572171 | Local Translation Offset seems to do nothing
First of all I'd like to thank you for the brilliant plugin!
I can't figure out how to use Offsets > Local Translation Offset. It doesn't work for me, whatever values I use. Is there a bug or am I missing something?
Hi ababak,
Can you send me an example scene? The offset should move the object in its local axes.
Thanks!
Mariano.
Hi Mariano,
Here's the scene I've been using.
Thanks!
Regards,
Andrey.
curve_array_test.mb.zip
Hi ababak!
Yup, it seems I was missing some additional operations in the code. I've updated it, and it should now be working! Note that in chain mode it will not be continuous.
It is now fixed in 861be47f2357d9792a469a21f1d5e11b4bb26802
Great, thanks!
| gharchive/issue | 2016-03-17T12:44:16 | 2025-04-01T06:45:01.574017 | {
"authors": [
"ababak",
"mmerchante"
],
"repo": "mmerchante/instanceAlongCurve",
"url": "https://github.com/mmerchante/instanceAlongCurve/issues/4",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1816602600 | RegExMatch output variable causes incorrect transformations
Input:
if RegExMatch(uri, "^\[url=")
RegExMatch(uri, "\G[^\]]*", uri, 6)
else
{
MsgBox 1,, URI appears invalid:`n%uri%
IfMsgBox Cancel
return
}
Incorrect output:
if RegExMatch(uri, "^\[url=")
RegExMatch(uri[0], "\G[^\]]*", &uri, 6)
else
{
msgResult := MsgBox("uri[0] appears invalid:`n" uri[0], "", 1)
if (msgResult = "Cancel")
return
}
There are multiple issues:
The call which replaces the string in uri with a RegExMatchInfo incorrectly has its input changed from uri to uri[0]. On input, the variable still contains a string.
uri still contains a string in the else branch, so should not have been transformed.
The text "URI" inside the quoted string should not have been transformed even if it was in the if branch.
I think a simpler, less error-prone approach would be for uri := uri && uri[0] to be inserted after the RegExMatch call. Appending && uri := uri[0] to the call would also work if the return value is not being stored.
Properties with the same name are also erroneously transformed.
Input:
RegExMatch(haystack, regex, output)
x := task.output
Incorrect output:
RegExMatch(haystack, regex, &output)
x := task.output[0]
Unrelated variables and parameter names elsewhere in the script (perhaps hundreds of lines away?) are also transformed.
Input:
a(s, r) {
RegExMatch(s, r, m)
return m
}
b(m) {
return m
}
Incorrect output:
a(s, r) {
RegExMatch(s, r, &m)
return m[0]
}
b(m[0]) {
return m[0]
}
For the maintainers, I will attempt to cleanup and clarify parts about _RegExMatch and ConvertPseudoArray.
I'll write back about things that might require more changes.
For the maintainers, I will attempt to cleanup and clarify parts about _RegExMatch and ConvertPseudoArray. I'll write back about things that might require more changes.
Thanks for your note, its helpful so people don't do duplicate work. If you work in a branch on your own repo, you can just tag this issue and then we should see a notification here.
Remember to add the associated test cases. You could just use the examples provided by Lexikos
The current way of updating the names is a simple replacement on a full line. I find it difficult to improve much without some bigger changes.
The changes made are:
Putting the reason for name replacements in a less ambiguous way.
A small change to not replace properties.
Restrict recognition of named captures to mitigate intrusive replacements. The replacement will only be done with names that can be gathered in the function conversion (this leaves out contents that could be inside variables).
But sadly:
There are still replacements inside quoted text.
There is still no limit on the lifetime of the name conversion.
While attempting the simpler replacement with uri && uri[0], I noticed strange repetitions in other cases, things similar to uri && uri[0] && uri[0]. To add it as a separate line definition, it needs to be done more carefully. When attempting to add the definition inside the same line, I needed to add wrapping parentheses, but this was causing the subLoopFunctions code to get stuck.
To fix those, it seems better to find an approach better than the current full-line replacement, instead of creating more workarounds.
partially completed in 69269b3edfdd482e1fe96547f694c25893d9344f
a(s, r) {
RegExMatch(s, r, m)
return m
}
b(m) {
return m
}
This will be corrected with Masking pull request once submitted (soon)
This will be corrected in the Masking pull request coming soon (already tested and corrected).
I think PR #208 will have some effect on this issue. It is still not the full fix, but progress. More work will be needed to address the global scope of the issue outlined in this thread, possibly a rewrite of how RegExMatch (and other structures) are handled in general, rather than multiple band-aids (as @safetycar mentioned). The upcoming PR that will introduce class/function/string masking may come in handy for this as well. It can be extended to include other structures such as If, Regex, etc., which may provide alternate fronts of attack.
| gharchive/issue | 2023-07-22T04:18:52 | 2025-04-01T06:45:01.589539 | {
"authors": [
"Lexikos",
"andymbody",
"mmikeww",
"safetycar"
],
"repo": "mmikeww/AHK-v2-script-converter",
"url": "https://github.com/mmikeww/AHK-v2-script-converter/issues/118",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
2477751014 | Parameters for Z and f2
The parametrization (BW, masses, and widths) should be in the paper:
3893.0 ± 2.3 ± 19.9
44.2 ± 5.4 ± 9.1
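For reference, these numbers are presumably the mass and width entering a relativistic Breit-Wigner amplitude. A sketch (the exact lineshape used in the analysis, and the MeV units, are assumptions):

```latex
\mathrm{BW}(m) = \frac{1}{m_0^2 - m^2 - i\, m_0 \Gamma_0},
\qquad
m_0 = 3893.0 \pm 2.3 \pm 19.9~\mathrm{MeV},
\quad
\Gamma_0 = 44.2 \pm 5.4 \pm 9.1~\mathrm{MeV}
```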
| gharchive/issue | 2024-08-21T11:07:10 | 2025-04-01T06:45:01.591324 | {
"authors": [
"mmikhasenko"
],
"repo": "mmikhasenko/Jpsipipi",
"url": "https://github.com/mmikhasenko/Jpsipipi/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
244840620 | Running app as root breaks on Wayland
The application launcher starts this app as root, which should be avoided, as it breaks the app on Wayland. Systemd can handle root requests itself when needed. I suggest changing the app launcher/.desktop to just start it with systemd-manager instead of pkexec systemd-manager (aliased as systemd-manager-pkexec).
hi,
what should i do to run the app with gnome wayland?
@Cherkah If you have a recent version of Gnome it should use Wayland by default, otherwise you can select Gnome instead of Gnome on Xorg on the login screen. If you run echo $XDG_SESSION_TYPE in your terminal it should say if you're using Wayland or X.
If that says Wayland then this app will also run natively on Wayland.
ok,
it's my case (Debian stretch/testing), so GNOME with Wayland by default.
As the rustup process does not work with Wayland, I'll remove rustup and install systemd-manager via the installation script so as to get proper integration with PolicyKit.
Let's try, and I'll confirm next.
If that says Wayland then this app will also run natively on Wayland.
Sorry, but the app does not run on Wayland. After entering the password, nothing happens.
Yes, that's exactly the problem. Apps can't run as root on Wayland. Only actions that need root access should request root permissions.
Systemd can handle root requests itself when needed.
This is not true. You are referring to the systemctl command. This application is not using systemctl, but communicating directly to systemd via it's dbus IPC API. Attempting to tell systemd to enable/disable a service will give you an interactive authentication required error.
| gharchive/issue | 2017-07-22T11:03:06 | 2025-04-01T06:45:01.644765 | {
"authors": [
"Cherkah",
"flipflop97",
"mmstick"
],
"repo": "mmstick/systemd-manager",
"url": "https://github.com/mmstick/systemd-manager/issues/48",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
649892083 | chore: Remove active element check
We should be aware to what extent we rely on the order of events and why the events are fired in that order. I'm being a bit nitpicky because it seems like this relies on react internals regarding event handling which are currently changing (they want to switch from focus to focusin).
Hopefully the comment makes sense. I used https://codesandbox.io/s/zealous-sky-zdy69?file=/src/App.js to understand event order when using native element.addEventListener.
Passes yarn test:unit and yarn test:karma (with browserstack) locally
stale
| gharchive/pull-request | 2020-07-02T12:28:16 | 2025-04-01T06:45:01.653566 | {
"authors": [
"eps1lon"
],
"repo": "mnajdova/material-ui",
"url": "https://github.com/mnajdova/material-ui/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
494878236 | Better handling of montage files
When reading montage files, users seem to face a number of issues.
Documentation of the different file formats is not clear. For example, *.hpts should not have a header and a column of type. This is not clearly explained anywhere. You cannot figure this out without looking at the code.
Inconsistent handling of numpy versions when reading text files. Here it is handled properly: https://github.com/mne-tools/mne-python/blob/master/mne/channels/montage.py#L325-L328 but this is not the case in all calls. Ideally, this should be abstracted away into a private helper function.
maybe an easy issue for @fmamashli to solve who encountered this in the first place
FYI the read_montage is currently being deprecated in
https://github.com/mne-tools/mne-python/pull/6764
to make sure we are addressing the issue @fmamashli
https://github.com/fmamashli can you clarify the usecase? How was the
.hpts
file produced? what does it contain? points in what coordinate system?
we bumped the minimum version of numpy. We no longer need to keep both versions of skip_row / header.
We digitized the channels using a new system: Localite.
.hpts: we followed MNE manual: EEG, channels name, coordinates (x,y,z) in mm.
The coordinate system is RAS.
The problem was that in MNE-Python there was no good documentation on the format of the files and what exactly they should contain, like headers, columns, and coordinate system.
also just to add to what @fmamashli said, see the documentation of mne-c. It is clearly mentioned what the format should look like.
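For reference, a sketch of what an .hpts file looks like per the mne-c conventions mentioned above (the details here are an interpretation of those docs, not taken from this thread): plain text, no header, one point per line with columns <category> <identifier> <x> <y> <z> in mm, where category is one of cardinal, hpi, eeg, extra, and the cardinal identifiers 1/2/3 are LPA/nasion/RPA. The coordinate values below are made up:

```
cardinal 2    0.0   95.0    0.0
cardinal 1  -75.0    0.0    0.0
cardinal 3   75.0    0.0    0.0
eeg      1  -30.0   60.0   40.0
eeg      2   30.0   60.0   40.0
```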
if you use the current master you can try this command:
https://mne.tools/dev/generated/mne.channels.make_dig_montage.html#mne.channels.make_dig_montage
provided you have the locations already read in python
you could ideally specify that the points are in MRI RAS and then it should
work just fine (hopefully).
share some fake data if needed.
We managed for now. But I would say you need a documentation page analogous to this one for reading in raw files.
should be fine now with:
https://mne.tools/dev/auto_tutorials/misc/plot_sensor_locations.html#sphx-glr-auto-tutorials-misc-plot-sensor-locations-py
https://mne.tools/dev/overview/implementation.html#dig-formats
Cool !
| gharchive/issue | 2019-09-17T22:13:24 | 2025-04-01T06:45:01.678006 | {
"authors": [
"agramfort",
"fmamashli",
"jasmainak",
"massich"
],
"repo": "mne-tools/mne-python",
"url": "https://github.com/mne-tools/mne-python/issues/6782",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
585687902 | mne.Report.parse_folder doesn't recognize '_meg.fif' files
Describe the bug
Report.parse_folder wouldn't recognize BIDS-format raw files (*meg.fif), although the documentation example
Getting started with mne.Report suggests it should.
Steps to reproduce
Create the raw data object and save it with a filename ending with meg.fif
Use Report.parse_folder to create report for this file
from mne import create_info
from mne import Report
from mne.io import RawArray
import numpy as np
n_ch = 5
info = create_info(n_ch, sfreq=200)
data = np.random.randn(n_ch, 1000)
RawArray(data, info).save("test_meg.fif")
report = Report()
report.parse_folder('.', pattern='*meg.fif', render_bem=False)
report.save("report.html")
Expected results
This is how it looks if I save the raw object as 'test_raw.fif' instead of 'test_meg.fif'
Actual results
What I actually get is an empty report.
Here's the console output of the script above:
Creating RawArray with float64 data, n_channels=5, n_times=1000
Range : 0 ... 999 = 0.000 ... 4.995 secs
Ready.
Writing /home/dmalt/Code/python/playground/parse_folder/test_meg.fif
Closing /home/dmalt/Code/python/playground/parse_folder/test_meg.fif [done]
Embedding : jquery.js
Embedding : jquery-ui.min.js
Embedding : bootstrap.min.js
Embedding : jquery-ui.min.css
Embedding : bootstrap.min.css
Iterating over 0 potential files (this may take some time)
Saving report to location /home/dmalt/Code/python/playground/parse_folder/report.html
Rendering : Table of Contents
Additional information
Platform: Linux-5.3.0-42-generic-x86_64-with-glibc2.10
Python: 3.8.1 (default, Jan 8 2020, 22:29:32) [GCC 7.3.0]
Executable: /home/dmalt/.miniconda3/envs/mne_dev/bin/python
CPU: x86_64: 8 cores
Memory: Unavailable (requires "psutil" package)
mne: 0.20.dev0
numpy: 1.18.1 {blas=mkl_rt, lapack=mkl_rt}
scipy: 1.4.1
matplotlib: 3.1.3 {backend=Qt5Agg}
sklearn: Not found
numba: Not found
nibabel: Not found
cupy: Not found
pandas: Not found
dipy: Not found
mayavi: Not found
pyvista: Not found
vtk: Not found
Looking at the source code at mne/report.py suggests it should be an easy fix. I can create a PR for this. I'm not sure though if this is just a documentation problem and you didn't intend to support the BIDS naming for raw files yet.
PR welcome
| gharchive/issue | 2020-03-22T10:32:18 | 2025-04-01T06:45:01.683414 | {
"authors": [
"agramfort",
"dmalt"
],
"repo": "mne-tools/mne-python",
"url": "https://github.com/mne-tools/mne-python/issues/7493",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
958305118 | [BUG] Coordinate Frame not Saved
I'm not sure if I'm missing something obvious but it looks like the coordinate frame doesn't survive the round trip to and from disk. I'm not sure why that is.
import numpy as np
import mne
for ch_type in ('eeg', 'seeg', 'ecog', 'dbs'):
raw = mne.io.RawArray(
np.random.random((10, 1000)),
mne.create_info([f'ch{i}' for i in range(1, 11)], 1000, [ch_type] * 10))
montage = mne.channels.make_dig_montage(
{ch: np.random.random((3,)) for ch in raw.ch_names}, lpa=[1, 0, 0], nasion=[0, 0.1, 0], rpa=[-1, 0, 0],
coord_frame='mri')
raw.set_montage(montage)
raw.save('tmp-raw.fif', overwrite=True, verbose=False)
print(ch_type)
print('Before: ', raw.info['chs'][0]['coord_frame'])
info = mne.io.read_info('tmp-raw.fif')
print('After : ', info['chs'][0]['coord_frame'])
Note, it appears this is specific to seeg, ecog and dbs, it works for eeg.
To update: it looks like this was on purpose, but I think it would be better to assume that seeg, ecog, or dbs channel positions are in the head coordinate frame, because if they are set using inst.set_montage(montage) they will be converted. To my knowledge, there are no other public APIs to set the channel positions, so it seems they should be assumed to be in head if read from disk.
@larsoner, this is in https://github.com/mne-tools/mne-python/pull/9585/, just wanted to make sure you were aware and thought this was reasonable
Yeah I think it's just an oversight that we never added these channel types to the inferred-coord-frame list
Great, just making sure it wasn't for a reason I wasn't thinking of
| gharchive/issue | 2021-08-02T16:22:40 | 2025-04-01T06:45:01.687302 | {
"authors": [
"alexrockhill",
"larsoner"
],
"repo": "mne-tools/mne-python",
"url": "https://github.com/mne-tools/mne-python/issues/9635",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1499686189 | fix: missing letter
Proposed Changes
fix: missing letter Y and j
Good catch!
| gharchive/pull-request | 2022-12-16T06:43:53 | 2025-04-01T06:45:01.725768 | {
"authors": [
"depapp",
"dmitry-zaitsev"
],
"repo": "mobile-dev-inc/maestro",
"url": "https://github.com/mobile-dev-inc/maestro/pull/515",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2243614436 | 🛑 Server Mobility Testing is down
In 3878286, Server Mobility Testing (https://mobilitysol.com:30443) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Server Mobility Testing is back up in d9d45c6 after 14 minutes.
| gharchive/issue | 2024-04-15T12:55:58 | 2025-04-01T06:45:01.740294 | {
"authors": [
"mobilitysol"
],
"repo": "mobilitysol/monitorweb",
"url": "https://github.com/mobilitysol/monitorweb/issues/2098",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1698696576 | 🛑 Server Mobility Produccion is down
In e6867b3, Server Mobility Produccion (https://mobilitysol.com:20443) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Server Mobility Produccion is back up in 7b36a67.
| gharchive/issue | 2023-05-06T16:37:28 | 2025-04-01T06:45:01.742713 | {
"authors": [
"mobilitysol"
],
"repo": "mobilitysol/monitorweb",
"url": "https://github.com/mobilitysol/monitorweb/issues/944",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1609857045 | I can't upload my text file to firebase storeage
I've followed the instructions and set all the parameters accordingly. The code seems fine when it comes to writing something into SPIFFS, but when it comes to uploading it to Firebase storage it says "File not found". Please help, thanks.
This repository is for the library that works with the RTDB database and legacy Firebase Cloud Messaging only.
You should read the comments in the library examples carefully.
That is because the file name you provide to the upload function does not exist in the flash filesystem.
You can upload the sketch data using the upload tool plugin in the Arduino IDE by creating a data folder, with your files inside, at the same path as your sketch file.
The simplest way to test without uploading sketch data is to run the download example first.
Then test the upload example with the file that you downloaded previously.
Hi, I’ve tried the DownloadFile and it worked fine. It can download and read the file in Firebase Storage. But when I swap to UploadFile and change mime type to “text/plain”, the same message appeared “File not found”.
Don't just say that, because it can't help anything.
You should post your code and all the debug messages on serial to see what's wrong in your code.
Please use GitHub instead of replying by email.
I’ve posted it on GitHub thanks.
As I said, you should post your full code, not snippets, and all the serial debug output.
Don't waste my time with so little information.
Sorry, I’ve posted the full one.
The issue will be fixed soon. I will inform you when it is available.
| gharchive/issue | 2023-03-04T16:55:38 | 2025-04-01T06:45:01.758176 | {
"authors": [
"dnshshkr",
"mobizt"
],
"repo": "mobizt/Firebase-ESP32",
"url": "https://github.com/mobizt/Firebase-ESP32/issues/267",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2431153490 | ENHANCEMENT - Add access token support to OTA updates
First of all, thank you very much for actively maintaining this library, even upgrading from the older ESP32/8266 libraries to this new one. I've found it incredibly useful.
Is your feature request related to a problem? Please describe.
Essentially when performing a storage.ota() it works fine if the bucket is publicly available, but with security rules I am unable to download without passing &token={accessToken}. I followed some of the code back to the Storage::sendRequest() function, where it appends options.extras += "?alt=media";. By manually hard coding my token, I was able to successfully perform an OTA update.
Describe the solution you'd like
As far as I know, if a bucket is protected by rules it requires an access token even if the rules would pass otherwise. Should be a pretty simple addition to add an option to pass a token to the storage.ota() which will be passed to the extras as well.
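For reference, a Python sketch of the tokened download URL construction described above (the bucket, object path and token are placeholder values, not taken from this issue):

```python
from urllib.parse import quote

def download_url(bucket, object_path, token=None):
    """Build a Firebase Storage download URL; the object path is
    percent-encoded and ?alt=media requests the raw file contents."""
    url = (f"https://firebasestorage.googleapis.com/v0/b/{bucket}"
           f"/o/{quote(object_path, safe='')}?alt=media")
    if token is not None:  # the extra query parameter this request asks for
        url += f"&token={token}"
    return url

# hypothetical values, for illustration only:
print(download_url("my-app.appspot.com", "ota/firmware.bin", "abc123"))
```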
Again, I appreciate all you have done for the open source ESP community, I've found your libraries incredibly helpful.
This library supports the Firebase Storage REST API and the Google Cloud Storage JSON API.
For downloads, the API endpoint is specific to the storage bucket. The public-access download URL you mentioned for OTA, which contains the token in its query parameters, is totally different and not supported.
In conclusion, this library focuses on safety and reliability and works with Firebase Authentication; your request for OTA over a provided URL can be implemented separately in user code.
It's a very reliable library in my experience so far. :)
I've modified some of your source code in my own application to take tokens as an input parameter, which has solved the issue for myself. I thought it would be a useful feature for others (and when I update) to request it as a feature.
Maybe it's a firebase config thing I could change, but with my very basic rules of 'allow read: if request.auth != null', even when I am logged in as the example shows, I still get error -114 and see some denies in the console, which goes away with my rudimentary hard-coded token in the query parameters.
I'll check tomorrow, but my understanding was the tokens are not used for authentication, but to make the URL unguessable. Will keep you updated if I find anything more worth mentioning.
Thanks again for the work you've put into this library, much love.
Thank you so much if you like the library.
I read your post again, and I see you've found a way to do a privileged direct download of the file object; I will think about how to include it.
With the latest version (v1.3.7), I added a third parameter to the FirebaseStorage::Parent class for the file access token.
You can try it when it is available in IDE.
Tested in 1.3.7, works like a charm!
Thanks!
| gharchive/issue | 2024-07-26T00:06:26 | 2025-04-01T06:45:01.764649 | {
"authors": [
"Xtremmy",
"mobizt"
],
"repo": "mobizt/FirebaseClient",
"url": "https://github.com/mobizt/FirebaseClient/issues/121",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
520094515 | IPv4 address parse error
For a long time, I've been using the vendored ipvs package. Recently I updated it to the latest version and it doesn't work anymore, since the address family is determined incorrectly. I don't have any IPv6 addresses.
Here's the output of GetDestinations():
parseIP Error ip=[10 1 1 21 0 0 0 0 0 0 0 0 0 0 0 0]
Root cause: no destination address family is returned by IPVS; see PR #2487.
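The raw value above is a 16-byte buffer whose first four bytes (10.1.1.21) carry the IPv4 address; without the address family from IPVS, the parser cannot tell it apart from an IPv6 address. A language-neutral illustration of family-aware decoding (a Python sketch; the actual fix is in Go, in PR #2487):

```python
import ipaddress
import socket

def parse_ipvs_addr(raw, family):
    """Decode a 16-byte IPVS address buffer using the address family.

    For AF_INET only the first 4 bytes are meaningful; for AF_INET6
    the whole 16-byte buffer is the address.
    """
    if family == socket.AF_INET:
        return str(ipaddress.IPv4Address(raw[:4]))
    if family == socket.AF_INET6:
        return str(ipaddress.IPv6Address(raw))
    raise ValueError(f"unknown address family: {family}")

raw = bytes([10, 1, 1, 21] + [0] * 12)  # the buffer from the error message
print(parse_ipvs_addr(raw, socket.AF_INET))
```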
| gharchive/issue | 2019-11-08T16:01:02 | 2025-04-01T06:45:01.867189 | {
"authors": [
"ep4eg",
"kwanhur"
],
"repo": "moby/libnetwork",
"url": "https://github.com/moby/libnetwork/issues/2480",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
844608664 | Prevent holding partitions hostage
Allow the script to unmount the partition when the rootfs.squashfs file is not found.
Need to bump those versions too: https://github.com/mocaccinoOS/kernel-repo/blob/master/packages/initramfs/collection.yaml#L5 and https://github.com/mocaccinoOS/kernel-repo/blob/master/packages/initramfs/collection.yaml#L22
| gharchive/pull-request | 2021-03-30T14:05:34 | 2025-04-01T06:45:01.903824 | {
"authors": [
"jcfrosty",
"mudler"
],
"repo": "mocaccinoOS/kernel-repo",
"url": "https://github.com/mocaccinoOS/kernel-repo/pull/75",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
202903335 | Drop Python 2.6 support
Python 2.6 shouldn't be supported anymore.
https://github.com/moccu/django-markymark/pull/17 is merged, can this be closed?
Closed with 542f355
| gharchive/issue | 2017-01-24T18:37:42 | 2025-04-01T06:45:01.907732 | {
"authors": [
"hugovk",
"lenarother",
"stephrdev"
],
"repo": "moccu/django-markymark",
"url": "https://github.com/moccu/django-markymark/issues/14",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
41066540 | Don't report timeouts while enableTimeouts=false
Timeouts that occur while enableTimeouts = false should not be reported when timeouts are enabled again before calling the callback.
Hi @Rob--W,
I agree with @boneskull that I don't really see the necessity. Should others chime in and ask for the same, I'd be happy to reopen the corresponding issue. Note that the build of this PR also reported errors, so it would need some attention before being potentially merged anyway.
Thanks for the effort though!
| gharchive/pull-request | 2014-08-25T14:30:01 | 2025-04-01T06:45:01.918130 | {
"authors": [
"Rob--W",
"jbnicolai"
],
"repo": "mochajs/mocha",
"url": "https://github.com/mochajs/mocha/pull/1323",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2271982882 | NPE when you forget to call mock inside coVerify/verify
I have a test where I had a coVerify block and actually forgot to put in the call to the mocked method, and the error you get is just a NullPointerException. You should detect this and provide a better failure message.
I think the same would be true for every/coEvery since the NPE is coming from this line of Mockable.record:
return receiver!! to invocation!!
both values will be null if there are no calls to a mock. The simple solution is to throw a more intelligent exception/assertion in the try block after calling block
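The pattern being requested (failing fast with a descriptive message instead of letting the nulls propagate) can be sketched language-neutrally; shown here in Python, with illustrative names that are not mockative's API:

```python
class VerificationError(AssertionError):
    pass

class Recorder:
    """Tracks mock invocations made inside a verify/coVerify-style block."""
    def __init__(self):
        self.invocations = []

    def record(self, receiver, invocation):
        self.invocations.append((receiver, invocation))

    def last_invocation(self):
        if not self.invocations:
            # Instead of dereferencing a null receiver/invocation pair,
            # explain the likely mistake to the user.
            raise VerificationError(
                "No mock was called inside the verify block. "
                "Did you forget to invoke the mocked function?")
        return self.invocations[-1]

rec = Recorder()
try:
    rec.last_invocation()
except VerificationError as e:
    print(e)
```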
| gharchive/issue | 2024-04-30T16:29:45 | 2025-04-01T06:45:01.919548 | {
"authors": [
"dalewking"
],
"repo": "mockative/mockative",
"url": "https://github.com/mockative/mockative/issues/107",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
685423723 | Show progress when compiling user dictionary
It takes a very long time when I compile NEologd as a user dictionary.
Showing a progress indicator, especially while create_minimum_transducer runs, helps users decide whether to continue or abort compiling.
I will send a pull request to solve this issue; however, I'm not confident that this is the best approach from an architectural point of view.
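For illustration, a minimal shape such an indicator could take (a Python sketch only, not janome's actual create_minimum_transducer code; the names are made up):

```python
import sys

def compile_entries(entries, report_every=10000):
    """Process dictionary entries, printing progress to stderr so the
    user can decide whether to continue or abort a long compile."""
    total = len(entries)
    for i, entry in enumerate(entries, 1):
        # ... build the transducer from `entry` here ...
        if i % report_every == 0 or i == total:
            sys.stderr.write(f"\rcompiled {i}/{total} entries")
            sys.stderr.flush()
    sys.stderr.write("\n")
    return total
```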
As for NEologd, it is recommended that you use it as the "system dictionary" instead of adding all the entries to the user dictionary. NEologd is too large to be added to the user dictionary: the whole user dictionary is loaded into the process's address space, which wastes a lot of memory when too many entries are added. The system dictionary is accessed via mmap, which is well suited to large dictionaries.
There is a procedure to build a NEologd-based janome; see
https://github.com/mocobeta/janome/wiki/(very-experimental)-NEologd-辞書を内包した-janome-をビルドする方法
Also I recently uploaded a custom janome package that was built with the latest neologd dictionary; the google drive link is available from the wiki.
I will look at your PR this weekend, thanks.
Thank you for the reply and for introducing the way to use NEologd as a system dictionary.
I understand that if compiling takes so long that I need a progress indicator, I should not select a user dictionary.
If there is nothing left to discuss, it is okay to close this issue.
Lastly, I'm looking forward to your session at PyCon JP this weekend 😃
| gharchive/issue | 2020-08-25T11:54:29 | 2025-04-01T06:45:01.945382 | {
"authors": [
"mocobeta",
"uezo"
],
"repo": "mocobeta/janome",
"url": "https://github.com/mocobeta/janome/issues/86",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
737714915 | I got this error while running the official Python example. What's the matter?
epoch train_loss dur
1 3.2377 2.2691
2 2.3052 0.2623
3 2.1487 0.2628
4 1.4069 0.2563
5 1.0567 0.2543
6 0.8704 0.2523
7 0.8014 0.2496
8 0.7548 0.2493
9 0.6871 0.2533
10 0.6194 0.2553
Query no. 1
Traceback (most recent call last):
File "E:/PycharmProjects/pytorch/3另另另mixMatch-master/mixMatch-master/1.py", line 82, in
query_idx, query_instance = learner.query(X_pool, n_instances=100)
File "C:\software\anaconda3\envs\pytorch\lib\site-packages\modAL\models\base.py", line 269, in query
return query_result, retrieve_rows(X_pool, query_result)
File "C:\software\anaconda3\envs\pytorch\lib\site-packages\modAL\utils\data.py", line 101, in retrieve_rows
raise TypeError('%s datatype is not supported' % type(X))
TypeError: <class 'torch.Tensor'> datatype is not supported
@Wan-Dou The data type being passed is torch.Tensor, which as of now is not a supported data type. The supported data types are:
sp.csr_matrix
pd.DataFrame
np.ndarray
list
I experienced a similar issue when running the MNIST example -- how should it be resolved in the context of the example? It seems that early on the data is read in using the ToTensor() function?
Also, this library looks really promising and I'm excited to use it! Thank you for all your work on this!
Hi!
Sorry for the relatively late answer, @nawabhussain is totally right. Currently, modAL does not support the torch.Tensor datatype. However, this would be a useful feature, so I am opening an issue on this and closing this one. Let me know if there is anything else!
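Until Tensor support lands, one workaround (a sketch, not part of modAL's API) is to convert the pool to a supported type before calling query, e.g. via Tensor.detach().cpu().numpy():

```python
def to_supported(X):
    """Convert a torch.Tensor pool to a NumPy array before passing it to
    learner.query(); already-supported types (ndarray, DataFrame,
    csr_matrix, list) pass through unchanged."""
    if hasattr(X, "numpy"):  # duck-typed check for torch.Tensor
        # detach() drops the autograd graph, cpu() moves data off the GPU
        return X.detach().cpu().numpy() if hasattr(X, "detach") else X.numpy()
    return X

# intended usage, with X_pool being a torch.Tensor:
# query_idx, query_instance = learner.query(to_supported(X_pool), n_instances=100)
```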
| gharchive/issue | 2020-11-06T12:16:50 | 2025-04-01T06:45:01.951344 | {
"authors": [
"Wan-Dou",
"cosmic-cortex",
"michelewang",
"nawabhussain"
],
"repo": "modAL-python/modAL",
"url": "https://github.com/modAL-python/modAL/issues/109",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
146337050 | Move master to 2.x!
Go, go, go!
Move to nonblocking I/O
Drop legacy versions (e.g. sar)
Tomcat 9 support (already wip)
Move to CMAKE (wip)
Separate C native code
Project rename
Protocol versioning
etc
Triggering build using a merge of 169304692fe504314711219aee1312faa5fb6ada on branch master:
Private: https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/mod_cluster-1.3.x-pull-player-executor/
Build 170 outcome was SUCCESS using a merge of 169304692fe504314711219aee1312faa5fb6ada on branch master:
Private: https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/mod_cluster-1.3.x-pull-player-executor/170
Public: https://hudson.jboss.org/hudson/job/mod_cluster-1.3.x-pull-player-executor/170
| gharchive/pull-request | 2016-04-06T14:44:26 | 2025-04-01T06:45:01.954917 | {
"authors": [
"modcluster-pull-request",
"rhusar"
],
"repo": "modcluster/mod_cluster",
"url": "https://github.com/modcluster/mod_cluster/pull/173",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2203962072 | Unable to install Libreoffice
Hey devs,
I was planning to install LibreOffice, but when I tried, it wouldn't install.
I have attached the screenshot, check that out.
Hoping for your response
Did you add any other distro's repo in sources.list?
Wait let me check
Check this out
Show the output of:
cat /etc/apt/sources.list.d/*.list
@bdhackers009 DM me on Telegram. Let me try to fix this.
Give me your Telegram ID/name.
| gharchive/issue | 2024-03-23T17:00:39 | 2025-04-01T06:45:01.959716 | {
"authors": [
"BDhackers009",
"SUMITDEY69"
],
"repo": "modded-ubuntu/modded-ubuntu",
"url": "https://github.com/modded-ubuntu/modded-ubuntu/issues/145",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1311433330 | Difference in size of generators between rustc LLVM and Kani
In https://github.com/model-checking/kani/pull/1378#discussion_r924977758, it was observed that Kani sometimes reports a different size of a generator than the LLVM backend of rustc. Interestingly, the WASM backend seems to disagree with LLVM as well (see https://github.com/rust-lang/rust/issues/62807).
The example code is the following:
#![feature(generators, generator_trait)]
use std::ops::Generator;

const FOO_SIZE: usize = 1024;
struct Foo([u8; FOO_SIZE]);

impl Drop for Foo {
    fn drop(&mut self) {}
}

fn noop() {}

fn move_before_yield_with_noop() -> impl Generator<Yield = (), Return = ()> {
    static || {
        let first = Foo([0; FOO_SIZE]);
        noop();
        let _second = first;
        yield;
        // _second dropped here
    }
}

fn main() {
    // This fails for Kani: it thinks the size is 1025, and so does the WASM backend.
    assert_eq!(1026, std::mem::size_of_val(&move_before_yield_with_noop()));
}
@fzaiser I was taking a look at this issue, and I bumped into the following rust issue: https://github.com/rust-lang/rust/issues/59123. It seems that the panic strategy can impact the generator size.
If I run the example you pasted above with the following command:
rustc +nightly-2022-08-16-x86_64-unknown-linux-gnu size.rs -C panic=abort && ./size
The assertion fails because the size is now 1025:
thread 'main' panicked at 'assertion failed: `(left == right)`
left: `1026`,
right: `1025`', size.rs:26:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Why would you write that specific test though? Isn't it actually a good thing that Kani differs from LLVM? The size of a generator is a property the developer should never rely on
@giltho Kani is supposed to be a bit-precise model checker, so we should get the sizes right. It's not something the user should rely on, but internally, Kani should compute the right sizes for offsets etc.
@fzaiser
so we should get the sizes right
I agree, but what is the "right" size? If WASM and LLVM disagree, why would LLVM be "right" and WASM be "wrong"?
The test has been disabled for WASM; I don't think it's a bad thing to disable it for Kani too.
@giltho It looks like the actual difference is not LLVM vs WASM but "panic=unwind" vs "panic=abort". So we may have to update the tests. I'll update the title when I know for sure what the reason is.
| gharchive/issue | 2022-07-20T16:14:27 | 2025-04-01T06:45:01.965454 | {
"authors": [
"celinval",
"fzaiser",
"giltho"
],
"repo": "model-checking/kani",
"url": "https://github.com/model-checking/kani/issues/1395",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
937225049 | Failed to compile "syn" dependency
I tried RMC on the virtio devices code:
https://github.com/firecracker-microvm/firecracker/tree/main/src/devices/src/virtio
using the following command line invocation as per PR #264:
RUST_BACKTRACE=1 RUSTFLAGS="-Z trim-diagnostic-paths=no -Z codegen-backend=gotoc --cfg=rmc" RUSTC=rmc-rustc cargo build --target x86_64-unknown-linux-gnu -j1
with RMC version: main-153-2021-07-02
I expected the code to compile succesfully.
Instead, this happened:
error[E0460]: found possibly newer version of crate `std` which `proc_macro2` depends on
--> /home/ANT.AMAZON.COM/sandreim/.cargo/registry/src/github.com-1ecc6299db9ec823/syn-1.0.55/src/lib.rs:304:1
|
304 | extern crate proc_macro2;
| ^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: perhaps that crate needs to be recompiled?
= note: the following crate versions were found:
crate `std`: /home/ANT.AMAZON.COM/sandreim/rmc/build/x86_64-unknown-linux-gnu/stage1/lib/rustlib/x86_64-unknown-linux-gnu/lib/libstd-5fb2fb5733ae0bb1.rlib
crate `std`: /home/ANT.AMAZON.COM/sandreim/rmc/build/x86_64-unknown-linux-gnu/stage1/lib/rustlib/x86_64-unknown-linux-gnu/lib/libstd-5fb2fb5733ae0bb1.so
crate `proc_macro2`: /home/ANT.AMAZON.COM/sandreim/firecracker/build/cargo_target/debug/deps/libproc_macro2-de0f0702f4c2e4d3.rmeta
Hi @sandreim , thank you for the bug report!
I have not been able to reproduce this error in an Ubuntu instance. The whole crate compiles successfully for me:
ubuntu@ip-172-31-26-129:~/firecracker-2/src/devices/src/virtio - main $ RUST_BACKTRACE=1 RUSTFLAGS="-Z trim-diagnostic-paths=no -Z codegen-backend=gotoc --cfg=rmc" RUSTC=rmc-rustc cargo build --target x86_64-unknown-linux-gnu -j1
Compiling proc-macro2 v1.0.24
Compiling libc v0.2.81
Compiling unicode-xid v0.2.1
Compiling syn v1.0.55
Compiling bitflags v1.2.1
Compiling serde_derive v1.0.118
Compiling serde v1.0.118
Compiling byteorder v1.3.4
Compiling ryu v1.0.5
Compiling serde_json v1.0.60
Compiling log v0.4.11
Compiling cfg-if v0.1.10
Compiling net_gen v0.1.0 (/home/ubuntu/firecracker-2/src/net_gen)
Compiling itoa v0.4.6
Compiling vm-superio v0.3.0
Compiling lazy_static v1.4.0
Compiling crc64 v1.0.0
Compiling virtio_gen v0.1.0 (/home/ubuntu/firecracker-2/src/virtio_gen)
Compiling quote v1.0.8
Compiling timerfd v1.2.0
Compiling vm-memory v0.4.0
Compiling vmm-sys-util v0.8.0
Compiling micro_http v0.1.0 (https://github.com/firecracker-microvm/micro-http?rev=49240ce#49240ce1)
Compiling event-manager v0.2.1
Compiling vm-memory v0.1.0 (/home/ubuntu/firecracker-2/src/vm-memory)
Compiling versionize_derive v0.1.3
Compiling bincode v1.3.1
Compiling utils v0.1.0 (/home/ubuntu/firecracker-2/src/utils)
Compiling versionize v0.1.6
Compiling logger v0.1.0 (/home/ubuntu/firecracker-2/src/logger)
Compiling snapshot v0.1.0 (/home/ubuntu/firecracker-2/src/snapshot)
Compiling dumbo v0.1.0 (/home/ubuntu/firecracker-2/src/dumbo)
Compiling rate_limiter v0.1.0 (/home/ubuntu/firecracker-2/src/rate_limiter)
Compiling mmds v0.1.0 (/home/ubuntu/firecracker-2/src/mmds)
Compiling devices v0.1.0 (/home/ubuntu/firecracker-2/src/devices)
warning: field is never read: `read_tap`
--> src/devices/src/virtio/net/test_utils.rs:66:5
|
66 | pub(crate) read_tap: ReadTapMock,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: `#[warn(dead_code)]` on by default
warning: 1 warning emitted
Finished dev [unoptimized + debuginfo] target(s) in 2m 13s
I believe the setup may have something to do with this error. Can you share some details about the setup (OS, rust versions, etc.) where you are running this?
Tried with 0c751a9674f, and we're able to codegen the whole crate:
$ git clone https://github.com/firecracker-microvm/firecracker
$ cd firecracker/src/devices/src
$ cargo kani --only-codegen
<snip>
Finished dev [unoptimized + debuginfo] target(s) in 51.20s
Closing this issue.
| gharchive/issue | 2021-07-05T16:11:07 | 2025-04-01T06:45:01.970117 | {
"authors": [
"adpaco-aws",
"sandreim",
"zhassan-aws"
],
"repo": "model-checking/kani",
"url": "https://github.com/model-checking/kani/issues/288",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2532138614 | List Subcommand (Implementation)
Implementation of the list subcommand (and updates to the RFC).
As a larger test, I ran on the standard library (kani list -Z list -Z function-contracts -Z mem-predicates ./library --std) and manually verified that the results were correct. I pasted the output below. mem::swap only has modifies clauses, so we list zero contracts (see the "Modifies Clauses" section of the RFC for rationale).
Contracts:
Function
Contract Harnesses (#[kani::proof_for_contract])
alloc::layout::Layout::from_size_align_unchecked
alloc::layout::verify::check_from_size_align_unchecked
ascii::ascii_char::AsciiChar::from_u8
ascii::ascii_char::verify::check_from_u8
ascii::ascii_char::AsciiChar::from_u8_unchecked
ascii::ascii_char::verify::check_from_u8_unchecked
char::convert::from_u32_unchecked
char::convert::verify::check_from_u32_unchecked
char::methods::verify::as_ascii_clone
char::methods::verify::check_as_ascii_ascii_char
char::methods::verify::check_as_ascii_non_ascii_char
intrinsics::typed_swap
intrinsics::verify::check_typed_swap_u8
intrinsics::verify::check_typed_swap_char
intrinsics::verify::check_typed_swap_non_zero
mem::swap
mem::verify::check_swap_primitive
mem::verify::check_swap_adt_no_drop
ptr::align_offset
ptr::verify::check_align_offset_zst
ptr::verify::check_align_offset_u8
ptr::verify::check_align_offset_u16
ptr::verify::check_align_offset_u32
ptr::verify::check_align_offset_u64
ptr::verify::check_align_offset_u128
ptr::verify::check_align_offset_4096
ptr::verify::check_align_offset_5
ptr::alignment::Alignment::as_nonzero
ptr::alignment::verify::check_as_nonzero
ptr::alignment::Alignment::as_usize
ptr::alignment::verify::check_as_usize
ptr::alignment::Alignment::log2
ptr::alignment::verify::check_log2
ptr::alignment::Alignment::mask
ptr::alignment::verify::check_mask
ptr::alignment::Alignment::new
ptr::alignment::verify::check_new
ptr::alignment::Alignment::new_unchecked
ptr::alignment::verify::check_new_unchecked
ptr::alignment::Alignment::of
ptr::alignment::verify::check_of_i32
ptr::non_null::NonNull::<T>::new
ptr::non_null::verify::non_null_check_new
ptr::non_null::NonNull::<T>::new_unchecked
ptr::non_null::verify::non_null_check_new_unchecked
ptr::read_volatile
ptr::verify::check_read_u128
ptr::unique::Unique::<T>::as_non_null_ptr
ptr::unique::verify::check_as_non_null_ptr
ptr::unique::Unique::<T>::as_ptr
ptr::unique::verify::check_as_ptr
ptr::unique::Unique::<T>::new
ptr::unique::verify::check_new
ptr::unique::Unique::<T>::new_unchecked
ptr::unique::verify::check_new_unchecked
ptr::verify::mod_inv_copy
ptr::verify::check_mod_inv
ptr::write_volatile
NONE
Total
24
34
Standard Harnesses (#[kani::proof]):
ptr::unique::verify::check_as_mut
ptr::unique::verify::check_as_ref
ptr::unique::verify::check_cast
Terminal view (--pretty format):
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 and MIT licenses.
What is # of contracts? Why does mem::swap have 0?
"# of contracts" is the number of contracts applied to the function. See the PR description for the explanation for mem::swap--it's because we don't currently count modifies clauses as contracts.
@celinval Thanks for the feedback -- I removed contracts count and simplified the RFC. Also per offline discussion, I stabilized the gen_contracts_metadata function. It still uses unstable DefIds, since in this line:
else if let Some((target_name, target_def_id, _)) = attributes.interpret_for_contract_attribute()
target_def_id is an unstable DefId and AFAIK there's no way to convert an unstable DefId to a stable one. LMK if I'm mistaken and I'll fix it.
You could use this function to get the FnDef: https://github.com/model-checking/kani/blob/68387e4cf462541028892772d2fc12da8abd6a6c/kani-compiler/src/kani_middle/codegen_units.rs#L118
| gharchive/pull-request | 2024-09-17T20:53:41 | 2025-04-01T06:45:01.997938 | {
"authors": [
"carolynzech",
"celinval"
],
"repo": "model-checking/kani",
"url": "https://github.com/model-checking/kani/pull/3523",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
200749018 | Incorrect description of diagonalizable matrices in eigenValues-routine
Reported by HansOlsson on 17 Apr 2009 12:52 UTC
Modelica.Math.Matrices.eigenValues
incorrectly state that:
---
With function Matrices.eigenValueMatrix, a real block diagonal matrix is constructed from the eigenvalues such that
A = eigenvectors * eigenValueMatrix(eigenvalues) * inv(eigenvectors)
provided the eigenvector matrix "eigenvectors" can be inverted (an inversion is possible, if all eigenvalues are different and no eigenvalue is zero).
--
The following text at the end should be removed:
"and no eigenvalue is zero".
This is diagonalization, and a matrix can always be diagonalized if the eigenvalues are different (regardless of whether one of them is zero or not). If some eigenvalues are equal, the matrix can sometimes still be diagonalized - but the text uses "if" and not "if and only if", so that should be ok.
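A quick numeric check of this claim in plain Python (not from the ticket): the 2x2 matrix [[1, 1], [0, 0]] has the distinct eigenvalues 1 and 0, one of which is zero, yet A = eigenvectors * eigenValueMatrix(eigenvalues) * inv(eigenvectors) still holds:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 1], [0, 0]]       # eigenvalues 1 and 0: distinct, one is zero
V = [[1, 1], [0, -1]]      # columns are the eigenvectors
V_inv = [[1, 1], [0, -1]]  # this particular V is its own inverse
D = [[1, 0], [0, 0]]       # eigenValueMatrix(eigenvalues)

assert matmul(matmul(V, D), V_inv) == A  # A = V * D * inv(V)
```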
Migrated-From: https://trac.modelica.org/Modelica/ticket/162
Comment by dietmarw on 5 Jun 2009 08:35 UTC
from [milestone:Design62 62nd Modelica Design Meeting]:
Status: Check text proposal
Problem: Suggestion to correct text in the specification
Comment by dietmarw on 17 Jun 2009 06:46 UTC
Scheduled for MSL3.1
Comment by otter on 21 Jun 2009 08:45 UTC
Fixed in 5c10346e3eee255fa4bb689cd4d8a3f7d010a543
| gharchive/issue | 2017-01-13T22:36:13 | 2025-04-01T06:45:02.004528 | {
"authors": [
"modelica-trac-importer"
],
"repo": "modelica/Modelica",
"url": "https://github.com/modelica/Modelica/issues/162",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2506066220 | Training stops for KTO after model loads into memory.
Describe the bug
What the bug is, and how to reproduce, better with screenshots(描述bug以及复现过程,最好有截图)
The process stops after loading the model into memory and processing the dataset. I also tried another dataset that worked before (15-25 days ago), but it's not working now; this same configuration worked 15-25 days ago.
I also tried using trl==0.9.6 but had the same issues. I also tried switching servers between different vendors and using H100s instead of A100s.
Training arguments:
USE_HF=1 \
CUDA_VISIBLE_DEVICES=0,1 \
swift rlhf \
--rlhf_type kto \
--model_type llama3-70b-instruct \
--model_id_or_path ~/models/llama3-70b-instruct \
--beta 0.1 \
--desirable_weight 1.0 \
--undesirable_weight 1.0 \
--model_revision master \
--sft_type lora \
--tuner_backend peft \
--template_type AUTO \
--dtype AUTO \
--output_dir output \
--dataset ~/rlhf/stage3/small_refusals_kto.jsonl \
--num_train_epochs 1 \
--max_length 8192 \
--check_dataset_strategy warning \
--lora_rank 32 \
--lora_alpha 64 \
--lora_dropout_p 0.00 \
--lora_target_modules DEFAULT \
--gradient_checkpointing true \
--batch_size 1 \
--weight_decay 0.0 \
--learning_rate 2e-4 \
--gradient_accumulation_steps 2 \
--max_grad_norm 0.5 \
--warmup_ratio 0.03 \
--eval_steps 100 \
--save_steps 100 \
--save_total_limit 2 \
--logging_steps 10 \
--use_flash_attn true
Logs:
Had to use pastebin because of github issue body limit.
Pastebin.
Your hardware and system info
Write your system info here, such as CUDA version, system, GPU model, and torch version
$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Thu_Mar_28_02:18:24_PDT_2024
Cuda compilation tools, release 12.4, V12.4.131
Build cuda_12.4.r12.4/compiler.34097967_0
GPUs: 2xA100 from Massed Compute
Additional context
Add any other context about the problem here
This seems like a sudden death; I do think there is a memory problem. Can you please observe the memory usage when running this training?
I had the same config with the same dataset and model, which worked. But I will check.
Here is my gpu usage at the crash point:
GPU 1: 71971 MiB
GPU 2: 71975 MiB
After the crash both become 1 MiB.
Before the crash, the memory kept filling up while loading the model.
It also happens when quantized to 8-bit.
GPU 1: 35301 MiB
GPU 2: 43507 MiB
There is ~77 GB of free memory.
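To track how the memory fills up over time, per-GPU usage can be polled with `nvidia-smi --query-gpu=index,memory.used --format=csv,noheader -l 5` and parsed with a small sketch like the one below (the helper name is illustrative, not part of ms-swift):

```python
def parse_gpu_memory(csv_text):
    """Parse `nvidia-smi --query-gpu=index,memory.used --format=csv,noheader`
    output, where each line looks like "0, 71971 MiB".

    Returns a dict mapping GPU index to used memory in MiB.
    """
    usage = {}
    for line in csv_text.strip().splitlines():
        idx, mem = line.split(",")
        usage[int(idx)] = int(mem.strip().split()[0])
    return usage

# Sample built from the numbers reported above:
sample = "0, 71971 MiB\n1, 71975 MiB"
print(parse_gpu_memory(sample))  # → {0: 71971, 1: 71975}
```

Logging this every few seconds up to the crash point would show whether usage ramps steadily (a genuine OOM) or drops to ~1 MiB abruptly (a killed process).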
Okay, so the training works as expected on Azure servers, but I hit these issues on TensorDock and Massed Compute. All the servers had 2xA100 80GB.
| gharchive/issue | 2024-09-04T18:55:05 | 2025-04-01T06:45:02.018810 | {
"authors": [
"Aunali321",
"tastelikefeet"
],
"repo": "modelscope/ms-swift",
"url": "https://github.com/modelscope/ms-swift/issues/1938",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1486156274 | 🛑 Portainer is down
In 30890c8, Portainer ($PORTAINER_URL) was down:
HTTP code: 503
Response time: 117 ms
Resolved: Portainer is back up in 989f9a5.
| gharchive/issue | 2022-12-09T06:50:25 | 2025-04-01T06:45:02.021445 | {
"authors": [
"modem7"
],
"repo": "modem7/Status",
"url": "https://github.com/modem7/Status/issues/2130",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1847775258 | 🛑 Prowlarr is down
In 16fc818, Prowlarr ($PROWLARR_URL) was down:
HTTP code: 523
Response time: 676 ms
Resolved: Prowlarr is back up in 3d7f2c0.
| gharchive/issue | 2023-08-12T05:43:08 | 2025-04-01T06:45:02.023589 | {
"authors": [
"modem7"
],
"repo": "modem7/Status",
"url": "https://github.com/modem7/Status/issues/3857",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1090295795 | 🛑 XBackbone is down
In d59a69d, XBackbone ($XBACKBONE_URL) was down:
HTTP code: 404
Response time: 457 ms
Resolved: XBackbone is back up in 4ef737f.
| gharchive/issue | 2021-12-29T06:17:10 | 2025-04-01T06:45:02.025915 | {
"authors": [
"modem7"
],
"repo": "modem7/Status",
"url": "https://github.com/modem7/Status/issues/444",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1301875622 | doc: avoid prepare failure
When running pnpm install, a prepare failure always occurs, which prevents the application from working.
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. 聂焱 seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account. You have signed the CLA already but the status is still pending? Let us recheck it.
| gharchive/pull-request | 2022-07-12T10:41:41 | 2025-04-01T06:45:02.028728 | {
"authors": [
"CLAassistant",
"LittleMoonkk"
],
"repo": "modern-js-dev/modern.js",
"url": "https://github.com/modern-js-dev/modern.js/pull/1357",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
170067738 | Is google maps key no longer needed?
switched to Leaflet for live map engine
so we no longer need
GOOGLE_MAPS_KEY = 's3cr3t'
in config, right?
Nah, doesn't use Google Maps anymore.
I think the heatmaps are still using Google Maps. I get an error when I try to run them, and when I open the web dev console I see: 'Google Maps API error: InvalidKeyMapError https://developers.google.com/maps/documentation/javascript/error-messages#invalid-key-map-error'
Google maps is used by the report feature
Ohhh, I see.
Reports will be moved to Leaflet too. It has a few plugins for heatmaps, too.
| gharchive/issue | 2016-08-09T02:23:50 | 2025-04-01T06:45:02.080180 | {
"authors": [
"DavidSelem",
"YonderGod",
"grenadesonfire",
"kravock",
"modrzew"
],
"repo": "modrzew/pokeminer",
"url": "https://github.com/modrzew/pokeminer/issues/173",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
373243054 | Ssoju x2
a simple cnn model push
Even though I push only one thing, other things keep showing up ㅠㅠ
.
| gharchive/pull-request | 2018-10-23T23:26:50 | 2025-04-01T06:45:02.081117 | {
"authors": [
"Ssojux2",
"ilguyi"
],
"repo": "modulabs/modu-tensorflow-v2",
"url": "https://github.com/modulabs/modu-tensorflow-v2/pull/11",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
205246853 | status command should say no status to report if none found.
This would help beginners. Kind of like 'this page intentionally left blank'.
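A minimal sketch of the requested behaviour (the function name and message are illustrative, not modulo's actual API):

```python
def format_status(entries):
    """Render the status output, falling back to an explicit
    message when there is nothing to report."""
    if not entries:
        return "No status to report."
    return "\n".join(entries)

print(format_status([]))               # → No status to report.
print(format_status(["dep-a: dirty"])) # → dep-a: dirty
```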
done.
| gharchive/issue | 2017-02-03T19:25:47 | 2025-04-01T06:45:02.092692 | {
"authors": [
"bsneed",
"woolie"
],
"repo": "modulo-dm/modulo",
"url": "https://github.com/modulo-dm/modulo/issues/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2645724471 | analog clock weather combo?!
There's an option "rain on analog clock". How does this work?
And what about the opened up circle around the analog clock?
That circle is the rain prognosis. For the next ten hours (IIRC), it shows a coloured line on the analog clock at those hours (or quarter hours, really) where the prognosis says there will likely be rain. It's a visualisation of what the weather server says about rain, in 15-minute steps, for the next ten hours.
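The mapping described above can be sketched roughly like this, assuming a list of quarter-hour rain flags starting at the current time (an illustration only, not NeatLauncher's actual code):

```python
def rain_arcs(start_hour, start_quarter, rain_flags):
    """Map quarter-hour rain flags onto arc segments of a 12-hour dial.

    The dial has 48 quarter-hour slots, so each slot spans 7.5 degrees.
    Consecutive rainy quarters are merged into a single arc.
    Returns a list of (start_deg, end_deg) tuples.
    """
    arcs = []
    for i, rainy in enumerate(rain_flags):
        if not rainy:
            continue
        q = ((start_hour % 12) * 4 + start_quarter + i) % 48
        start_deg = q * 7.5
        end_deg = start_deg + 7.5
        if arcs and arcs[-1][1] == start_deg:
            arcs[-1] = (arcs[-1][0], end_deg)  # extend the previous arc
        else:
            arcs.append((start_deg, end_deg))
    return arcs

# Rain expected 15:00-15:30 and 15:45-16:00:
print(rain_arcs(3, 0, [True, True, False, True]))  # → [(90.0, 105.0), (112.5, 120.0)]
```

Note that a forecast longer than 12 hours would wrap around the dial and overwrite earlier slots, which is presumably why the prognosis window is capped at ten hours.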
| gharchive/issue | 2024-11-09T06:51:41 | 2025-04-01T06:45:02.100744 | {
"authors": [
"bobblkabb",
"moehriegitt"
],
"repo": "moehriegitt/NeatLauncher",
"url": "https://github.com/moehriegitt/NeatLauncher/issues/11",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1571921518 | 🛑 STATUS.MOFE.IO is down
In 834f56c, STATUS.MOFE.IO (https://status.mofe.io) was down:
HTTP code: 0
Response time: 0 ms
Resolved: STATUS.MOFE.IO is back up in 7334b53.
| gharchive/issue | 2023-02-06T05:04:28 | 2025-04-01T06:45:02.105507 | {
"authors": [
"mofelee"
],
"repo": "mofelee/upptime",
"url": "https://github.com/mofelee/upptime/issues/1200",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1324867921 | 🛑 STATUS.MOFE.IO is down
In 98d0008, STATUS.MOFE.IO (https://status.mofe.io) was down:
HTTP code: 0
Response time: 0 ms
Resolved: STATUS.MOFE.IO is back up in 959cbec.
| gharchive/issue | 2022-08-01T19:22:23 | 2025-04-01T06:45:02.108626 | {
"authors": [
"mofelee"
],
"repo": "mofelee/upptime",
"url": "https://github.com/mofelee/upptime/issues/596",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
805616509 | Migrate library to null safety
Fixes #13
Properties desiredItemWidth, minSpacing and children of ResponsiveGridList are now required (because an error was thrown if nothing was passed).
Would probably be better to create a separate branch for nnbd and merge to master once null safety is in stable.
Can we get a version with this published?
@mohamed-selim-a @ms-facegraph
It seems the authors have not been active for almost a year.
The solution would be to publish a forked null safe package of this repository.
| gharchive/pull-request | 2021-02-10T15:27:38 | 2025-04-01T06:45:02.113552 | {
"authors": [
"JohnGalt1717",
"mzdm"
],
"repo": "mohamed-selim-a/ResponsiveGrid_Flutter",
"url": "https://github.com/mohamed-selim-a/ResponsiveGrid_Flutter/pull/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |