747889767
Feature/address docker issues Added documentation to address comments presented in Issue #16 Thanks for making the changes. Looks great!
gharchive/pull-request
2020-11-21T01:13:07
2025-04-01T06:45:39.529339
{ "authors": [ "droter", "vinnnyr" ], "repo": "ros-agriculture/lawn_tractor", "url": "https://github.com/ros-agriculture/lawn_tractor/pull/20", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2261299013
[CI] Specify runner/container images Necessary changes to make https://github.com/ros-controls/ros2_control_ci/pull/53 work for noble. @mergifyio backport humble
gharchive/pull-request
2024-04-24T13:24:32
2025-04-01T06:45:39.536531
{ "authors": [ "christophfroehlich" ], "repo": "ros-controls/kinematics_interface", "url": "https://github.com/ros-controls/kinematics_interface/pull/69", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
371162858
URe Support Added support for URE series. Tested on UR5E 5.0.0 and UR5 3.5.3. Requires https://github.com/ros-industrial/universal_robot/pull/380 (approved but not merged yet) As a note to those testing, in the URE, you have to enable Remote Control (Menu > Settings > System > Remote Control > Enable) then switch to Remote Control with the new button in the top right were you able to connect to a real robot? I keep getting connection refused. is there a different ur_bringup launch file for the e-series? I dont see one. @geg58 I tested with a real UR5 and UR5E. To do the initial connection and monitoring, you shouldn't need to enable the Remote Control, that is only if you are commanding motion. Ensure your IP and Subnet are set correctly in your computer and on the UR controller. @dniewinski Hello, I have a real UR5e robot. Do you have coded a ROS driver for UR5e? Thanks for your help. @happygaoxiao This PR is to add that support. Clone my fork and the ur_e branch from my fork of universal_robot and it should work @happygaoxiao This PR is to add that support. Clone my fork and the ur_e branch from my fork of universal_robot and it should work I got both of these going, but had to run the ur_e_description as well in order for it to all work. gazebo simulation wasnt able to update from joint states in rviz for me however - I'll try to debug and post an update. @dniewinski Hello, I seem not to understand what you said. Do you mean you test in the rivz for path planning and in the gazebo for simulation?. Have you ever tested your driver in the off-line simulator from UR Sim or the real robot? I want to control the robot in real-time 500Hz at the remote control and get the robot state(such as joint states). I have downloaded your package at the ur_e branch. But the data unpacking operation was written several years ago, which is different from the new robot in Client_Interface_V3.7andV5.1.xlsx. 
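Several of the connection problems reported above ("connection refused") come down to IP/subnet configuration rather than the driver itself. A quick sanity check before launching the driver is to probe the UR controller's client-interface ports (30001 primary, 30002 secondary, 30003 real-time) directly. This is an illustrative helper, not part of ur_modern_driver; the host address you pass in is your robot's IP.

```python
import socket

# Standard UR controller client-interface ports:
# 30001 primary, 30002 secondary, 30003 real-time.
UR_PORTS = (30001, 30002, 30003)

def port_reachable(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_controller(host, ports=UR_PORTS, timeout=1.0):
    """Map each port to its reachability.

    If every port is unreachable, the usual culprit is a wrong IP address
    or a mismatched subnet mask on the PC or the UR controller.
    """
    return {port: port_reachable(host, port, timeout) for port in ports}
```

For example, `check_controller("192.168.32.29")` (the robot IP used in this thread) should return `True` for all three ports when the network is set up correctly.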
@geg58, I launched these .launch files but not topic is relate to robot state in the offline simulator. How did you connect to the offline simulator? Maybe I do something wrong for my poor programming skills. I am unable to stream correct joint_states with our ur10e. launch file: <launch> <arg name="robot_ip" default="192.168.32.29" /> <include file="$(find ur_modern_driver)/launch/ur10e_bringup_joint_limited.launch"> <arg name="robot_ip" value="$(arg robot_ip)" /> </include> </launch> ur_modern_driver output: NODES / robot_state_publisher (robot_state_publisher/robot_state_publisher) ur_driver (ur_modern_driver/ur_driver) auto-starting new master process[master]: started with pid [24674] ROS_MASTER_URI=http://localhost:11311 setting /run_id to 1f5247dc-d3e8-11e8-9e5d-d46a6a72bdd1 process[rosout-1]: started with pid [24687] started core service [/rosout] process[robot_state_publisher-2]: started with pid [24703] process[ur_driver-3]: started with pid [24705] [ INFO] [1539985455.633114555]: Setting up connection: 192.168.32.29:30003 [ INFO] [1539985455.634013892]: Connection established for 192.168.32.29:30003 [ INFO] [1539985455.634048341]: Setting up connection: 192.168.32.29:30001 [ INFO] [1539985455.634406843]: Connection established for 192.168.32.29:30001 [ INFO] [1539985456.122771086]: Got VersionMessage: [ INFO] [1539985456.122817956]: project name: URControl [ INFO] [1539985456.122839889]: version: 5.1.0 [ INFO] [1539985456.122858795]: build date: 13-08-2018, 06:37:36 [ INFO] [1539985456.122878622]: Disconnecting from 192.168.32.29:30001 [ INFO] [1539985456.145725981]: ActionServer enabled [ INFO] [1539985456.145839812]: Use standard trajectory follower [ INFO] [1539985456.150687329]: Setting up connection: :50001 [ INFO] [1539985456.151025525]: Connection established for :50001 [ INFO] [1539985456.151250655]: Initializing ur_driver/URScript subscriber [ INFO] [1539985456.162271526]: The ur_driver/URScript initialized [ INFO] [1539985456.162372190]: 
Notifier: Pipeline disconnect will shutdown the node [ INFO] [1539985456.171320044]: Service 'ur_driver/robot_enable' activation mode: Never [ INFO] [1539985456.171431378]: Starting main loop [ INFO] [1539985456.171678158]: Setting up connection: 192.168.32.29:30003 [ INFO] [1539985456.171767861]: Starting pipeline RTPacket [ INFO] [1539985456.171947456]: Starting pipeline StatePacket [ INFO] [1539985456.172081742]: Setting up connection: 192.168.32.29:30002 [ INFO] [1539985456.173161873]: Connection established for 192.168.32.29:30002 [ INFO] [1539985456.173345630]: Connection established for 192.168.32.29:30003 [ INFO] [1539985456.175836298]: Starting ActionServer [ INFO] [1539985456.176157517]: Trajectory thread started rostopic echo /joint_states: header: seq: 4790 stamp: secs: 1539985391 nsecs: 114175471 frame_id: '' name: [shoulder_pan_joint, shoulder_lift_joint, elbow_joint, wrist_1_joint, wrist_2_joint, wrist_3_joint] position: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0] velocity: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0] effort: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0] remote control enabled or disabled doesn't seem to have any effect. I would love to get our e-series working so anything i can so, please let me know! @AustinDeric If you are seeing all zeroes for joint angles, but no errors in your terminal, it likely means the arm itself hasn't been started. In the new version of the Polyscope software, it doesn't seem to make you start the arm by default when the controller starts. You should see a green circle in the bottom-left of the UR screen. If you don't, press the circle and go through the steps to start and calibrate the arm and those joint angles should come up. Once that is done, see my comment above about enabling remote control so you can move the arm, not just monitor it @happygaoxiao This driver works for connecting to a real robot arm, and that is what I used for testing. It should also work the same when running it with the URCap SDK simulator, but I didn't test this. 
If you are running the Gazebo simulation, you don't need to use this driver @dniewinski Thanks for your reply. Could you tell me how to use your driver? There is no URe .launch file. I test the ur5_bringup.launch but get some errors: Traceback (most recent call last): File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner self.run() File "/usr/lib/python2.7/threading.py", line 763, in run self.__target(*self.__args, **self.__kwargs) File "/home/gx/catkin_ws/src/universal_robot-ur_e/ur_driver/src/ur_driver/driver.py", line 274, in __run self.__on_packet(packet) File "/home/gx/catkin_ws/src/universal_robot-ur_e/ur_driver/src/ur_driver/driver.py", line 186, in __on_packet state = RobotState.unpack(buf) File "/home/gx/catkin_ws/src/universal_robot-ur_e/ur_driver/src/ur_driver/deserialize.py", line 324, in unpack raise Exception("Fatal error when unpacking RobotState packet") Exception: Fatal error when unpacking RobotState packet For the testRT_comm.py, it shows: RobotStateRT has wrong length: 1108. So could you show me how to bring up the UR e-Series robot? @dniewinski Thanks for your reply. Could you tell me how to use your driver? There is no URe .launch file. Are you sure you are using the kinetic-devel branch of @dniewinski's fork? That is where the e-series changes are in. And there are e-series launch files as well, see here. But they don't magically change the driver, they wouldn't even really be needed. @happygaoxiao I'm glad to hear that the driver is working with 5.1! I would recommend using Kinetic with the current version of the driver. This PR is aimed at adding support for the latest UR software. Unfortunately, I think the tracking performance is a little outside of the scope of this PR and should likely be handled elsewhere. @gavanderhoorn ? @dniewinski Could your please tell me what can I do for this problem ? I have no idea about this. 
(1) When I run roslaunch ur_modern_driver ur5e_ros_control.launch , the robot moves at a very high speed to joint_states.position = [0,0,0,0,0,0]. It's so scary. In addition, I will improve the PID parameters or use the servoj script to design a feedback controller, for better tracking performance. @happygaoxiao you may be hitting the issue described in #206. Can you try the fix proposed in #213? I was successfully able to pass a trajectory to the UR10e using MoveIt. The robot executed the trajectory, but the result was not as expected. The robot overshoots the goal. actual video: https://youtu.be/7hVw656VKXw rviz: https://youtu.be/9qvoe9U74yY Here is the log from rostopic echo /move_group/goal https://gist.github.com/AustinDeric/1d316001b601bfcaaadf36ad46be592e Here is the log from rostopic echo /follow_joint_trajectory/goal: https://gist.github.com/AustinDeric/03f9125c0c11f20c1e17e1efd21308b1 To recreate: in terminal 1: roslaunch ur_modern_driver ur10e_bringup.launch robot_ip:=192.168.32.29 in terminal 2: roslaunch ur10_e_moveit_config ur10_e_moveit_planning_execution.launch in terminal 3: roslaunch ur10_e_moveit_config moveit_rviz.launch After an execution, if I replan and then execute, I will get the following error: Invalid Trajectory: start point deviates from current robot state more than 0.01 joint 'elbow_joint': expected: 0.567835, current: -0.622903 but it may be nothing I got both of these going, but had to run the ur_e_description as well in order for it to all work. Sim works great; you have to have 4 terminals: 1 for ur_e_description, 1 for ur_e_gazebo, 1 for ur_e_moveit_config, 1 for moveit rviz. Awesome @dniewinski thanks for getting this all going Hi @geg58, I'm trying to follow what you've done and running into a bit of a problem. 
I have the following things going: ur_e_gazebo ur5e.launch ur5_e_moveit_config ur5_e_moveit_planning_execution.launch sim:=true ur5_e_moveit_config moveit_rviz.launch When I try to run ur_e_description ur5e_upload.launch, it says that there are no processes to monitor and stops. However, it doesn't seem related to my error: In Rviz, I can plan a movement, but it fails when attempting to execute. In the console for ur5_e_moveit_planning_execution.launch, I see: [ERROR] [1540860321.447192390, 2083.033000000]: Unable to identify any set of controllers that can actuate the specified joints: [ elbow_joint shoulder_lift_joint shoulder_pan_joint wrist_1_joint wrist_2_joint wrist_3_joint ] [ERROR] [1540860321.447275983, 2083.033000000]: Known controllers and their joints: In the startup sequence, there are the following errors/warnings: [ WARN] [1540858218.000130178, 26.131000000]: Waiting for /follow_joint_trajectory to come up [ WARN] [1540858224.009196790, 32.131000000]: Waiting for /follow_joint_trajectory to come up [ERROR] [1540858230.018252366, 38.131000000]: Action client not connected: /follow_joint_trajectory [ INFO] [1540858230.095139068, 38.208000000]: Returned 0 controllers in list [ INFO] [1540858230.112521862, 38.226000000]: Trajectory execution is managing controllers [ERROR] [1540858230.144966664, 38.258000000]: Exception while loading move_group capability 'move_group/MoveGroupExecuteService': MultiLibraryClassLoader: Could not create class of type move_group::MoveGroupExecuteService Available capabilities: move_group/ApplyPlanningSceneService, move_group/ClearOctomapService, move_group/MoveGroupCartesianPathService, move_group/MoveGroupExecuteService, move_group/MoveGroupExecuteTrajectoryAction, move_group/MoveGroupGetPlanningSceneService, move_group/MoveGroupKinematicsService, move_group/MoveGroupMoveAction, move_group/MoveGroupPickPlaceAction, move_group/MoveGroupPlanService, move_group/MoveGroupQueryPlannersService, 
move_group/MoveGroupStateValidationService [ INFO] [1540860316.795162190, 2078.414000000]: Didn't received robot state (joint angles) with recent timestamp within 1 seconds. Check clock synchronization if your are running ROS across multiple machines! [ WARN] [1540860316.795212990, 2078.414000000]: Failed to fetch current robot state. Would you (or anyone) be able to point me in the right direction? Thanks! Hello @AustinDeric, your video is a little strange. Have you checked your trajectory? I have used dniewinski's driver and it works fine. I have launched ur5e_bringup.launch and ur5_e_moveit_planning_execution.launch for an eye-hand calibration. It moves smoothly and exactly to the goal. @brianzhang-git Have you installed the joint controllers? @AustinDeric Might sound like a silly question, but on the UR controller, is the motion speed slider in the bottom of the UR interface set to 100%? @dniewinski I set the speed to 10% for safety reasons. @happygaoxiao good to know, thanks! @AustinDeric I can see that being the issue. Try increasing the controller speed to 100% and see if it fixes the issue. If it does, you can limit the speed of the motion in the robot's description and MoveIt! config. @dniewinski making the velocity and acceleration scaling factor 0.2 worked perfectly. +1 from me, tested and working. Hello! Running into an error connecting to a UR5e via roslaunch ur_modern_driver ur5e_bringup.launch that's identical to issue #136. According to that thread, it seems that it's an issue with the binary stream and the deserializer on the UR software's side and is sometimes fixed by updating. I've run on versions 5.1.1.X and 5.2.0.X. Has anyone tried on these versions? It seems like @dniewinski has been running on 5.0.X and @AustinDeric on 5.1.0.X. My next step is to try re-imaging back to 5.1.0.X since it's the earliest available. @dniewinski Hello! I have the same problem as @happygaoxiao. 
And when I run roslaunch ur_modern_driver ur5e_ros_control.launch , the robot moves at a very high speed to joint_states.position = [0,0,0,0,0,0]. Do you have any idea about it? Thanks a lot! Rebased Hello, I have testing the ur_modern_driver which download from this link. I want to run the real world UR5e robotics arm with my Ubuntu 16.04 ros Kinetic framework. This is the command I run in different terminal. >>roslaunch ur_description ur5_upload.launch >>roslaunch ur_modern_driver ur5e_bringup.launch robot_ip:=192.168.1.150 >>roslaunch ur_modern_driver ur5e_bringup.launch robot_ip:=192.168.1.150 [ INFO] [1551948471.412181675]: Loading robot model 'ur5e'... [ WARN] [1551948471.412244966]: Skipping virtual joint 'fixed_base' because its child frame 'base_link' does not match the URDF frame 'world' [ INFO] [1551948471.412261449]: No root/virtual joint specified in SRDF. Assuming fixed joint [ INFO] [1551948471.887345150]: Loading robot model 'ur5e'... [ WARN] [1551948471.887415949]: Skipping virtual joint 'fixed_base' because its child frame 'base_link' does not match the URDF frame 'world' [ INFO] [1551948471.887450574]: No root/virtual joint specified in SRDF. Assuming fixed joint [ INFO] [1551948472.026621472]: Publishing maintained planning scene on 'monitored_planning_scene' >>roslaunch ur5_moveit_config moveit_rviz.launch config:=true [ERROR] [1551949536.231887280]: No robot state or robot model loaded [ INFO] [1551949536.236069906]: Loading robot model 'ur5e'... [ WARN] [1551949536.236114993]: Skipping virtual joint 'fixed_base' because its child frame 'base_link' does not match the URDF frame 'world' [ INFO] [1551949536.236128787]: No root/virtual joint specified in SRDF. Assuming fixed joint [ INFO] [1551949536.284219417]: Loading robot model 'ur5e'... 
[ WARN] [1551949536.284248174]: Skipping virtual joint 'fixed_base' because its child frame 'base_link' does not match the URDF frame 'world' [ INFO] [1551949536.284258547]: No root/virtual joint specified in SRDF. Assuming fixed joint [ INFO] [1551949536.361852337]: Starting scene monitor [ INFO] [1551949536.365366772]: Listening to '/move_group/monitored_planning_scene' [ INFO] [1551949536.810248478]: Constructing new MoveGroup connection for group 'manipulator' in namespace '' [ WARN] [1551949537.619982832]: Deprecation warning: Trajectory execution service is deprecated (was replaced by an action). Replace 'MoveGroupExecuteService' with 'MoveGroupExecuteTrajectoryAction' in move_group.launch [ INFO] [1551949537.631355072]: Ready to take commands for planning group manipulator. [ INFO] [1551949537.631549401]: Looking around: no [ INFO] [1551949537.631647323]: Replanning: no [ WARN] [1551949537.651003025]: Interactive marker 'EE:goal_ee_link' contains unnormalized quaternions. This warning will only be output once but may be true for others; enable DEBUG messages for ros.rviz.quaternions to see more details. [ INFO] [1551949542.332149068]: Stopping scene monitor [ INFO] [1551949542.334702244]: Starting scene monitor [ INFO] [1551949542.344621091]: Listening to '/planning_scene' I get few warning after I run the command. I try to plan and execute within the RVIZ but the robotics arm is not moving. May I know how to solve it? Hi, are you launching the upload before the bringup? If so, you shouldn't. Try following the Usage >>roslaunch ur_description ur5_upload.launch >>roslaunch ur_modern_driver ur5e_bringup.launch robot_ip:=192.168.1.150 Also you are calling ur5 launches, it should be ur5_e_moveit_config roslaunch ur5_moveit_config moveit_rviz.launch config:=true You need the latest version of https://github.com/ros-industrial/universal_robot branch Kinetic devel to have the e series packages. I hope it helps. 
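The confusion in this exchange is between the CB-series package names (`ur5_moveit_config`) and the e-series ones (`ur5_e_moveit_config`, plus the `ur5e_bringup.launch` driver launch file). The following hypothetical helper just illustrates that naming convention as described in the thread; it is not an official API of either package.

```python
# Hypothetical helper illustrating the package naming convention from the
# thread: CB-series models use e.g. ur5_moveit_config, while e-series
# models use ur5_e_moveit_config and a ur5e_bringup.launch driver launch.

def bringup_commands(model, robot_ip):
    """Return the roslaunch commands for a given UR model, e.g. 'ur5' or 'ur5e'."""
    if model.endswith("e"):              # e-series: ur3e, ur5e, ur10e
        base = model[:-1]                # 'ur5e' -> 'ur5'
        moveit_pkg = "%s_e_moveit_config" % base
        planning = "%s_e_moveit_planning_execution.launch" % base
    else:                                # CB-series: ur3, ur5, ur10
        moveit_pkg = "%s_moveit_config" % model
        planning = "%s_moveit_planning_execution.launch" % model
    return [
        "roslaunch ur_modern_driver %s_bringup.launch robot_ip:=%s" % (model, robot_ip),
        "roslaunch %s %s" % (moveit_pkg, planning),
        "roslaunch %s moveit_rviz.launch config:=true" % moveit_pkg,
    ]
```

For a UR5e at 192.168.1.150 this reproduces the three commands recommended above, one per terminal.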
Hi @martimorta-wood , I followed your solution and everything works, although there are warnings after I run the last command. Here are the commands I run in different terminals: roslaunch ur_modern_driver ur5e_bringup.launch robot_ip:=192.168.1.150 roslaunch ur5_e_moveit_config ur5_e_moveit_planning_execution.launch roslaunch ur5_e_moveit_config moveit_rviz.launch config:=true Now the problem is that after I plan and execute, it shows Uploading trajectory program to robot but no further action is executed on the robot. Is there anything wrong with my setup? How can I solve it? @ChunJyeBehBeh Make sure your controller has Remote Control enabled. It is in the settings menu. Here is a link to a tutorial I wrote about it if you need an explanation: http://www.clearpathrobotics.com/assets/guides/ur/ur_e_setup/controller.html @dniewinski I have enabled remote control already, but it is stuck at "Uploading trajectory program to robot". I have connected the ur5e to a router and at the same time I connect my laptop to that router with an Ethernet cable. In RViz MoveIt, try reducing the speed and acceleration scale to something like 0.05 and see if it works. @ChunJyeBehBeh In your image, you have enabled the display of the selection, but you haven't set the robot to remote control. In the menu you have pressed, you need to then select Remote Control. The image on the top bar will change from the pendant to the 2 boxes connected with a line. Once this is done, restart the ROS nodes and try again @ChunJyeBehBeh I just solved this problem by opening port 50001 on my computer. It seems that the Polyscope attempts to connect on this port; not sure if it is possible to detect this and throw an error message through the package? I'd like to start moving this forward again, but I think there are some issues to deal with before being able to merge. 
First, there's a couple simple inconsistencies between the CB and e-series launchfiles: When you last rebased, @dniewinski, the base of your branch went from before to after #236, where some new arguments were added to the ros_control launchfiles Similar to the above, shortly after your branch diverges from it, #254 was merged into kinetic-devel, which also adds arguments to all launchfiles Could you please add the same argument to the e-series launchfiles? Then there's the bigger issue of the ros_control velocity interface not working properly on the e-series. I've had the chance to work on this a little bit and I think I managed to implement the solution I suggested in this comment. You can see the changes I based on top of this branch in the ur-e branch in my fork. So far I've only been able to run a few tests on a UR5e, which look promising, but I'd like to test I didn't break the functionality for the CB series as well. I can check with a UR5 running 3.5.something, but it would be great if other people can test with other robots and PolyScope versions. At this point I'm unsure how to proceed. We can either merge this PR with velocity interface not working and let me rebase my changes and open a new PR, or fast-forward this PR to include my changes and review everything as a whole. Please keep in mind that my changes have not been thoroughly tested and may break existing functionality. What do you think, @dniewinski, @gavanderhoorn, @ipa-nhg? @miguelprada I will make the changes to the launch files for the missing arguments. As far as the velocity interface issue, I think it would be more complete/proper to include your fixes, but that also opens up the possibility of this merge taking much longer. @miguelprada I will make the changes to the launch files for the missing arguments. Thanks! There were some tabs instead of spaces there, but I took the liberty to fix that. 
As far as the velocity interface issue, I think it would be more complete/proper to include your fixes, but that also opens up the possibility of this merge taking much longer. Indeed. But merging this with a non-functional ros_control velocity interface is not ideal either. An intermediate option is to merge it while adding some kind of warning/error to tell the user that the mode is not supported. This would allow us to have some support for the e-series right now and leave the other set of changes for a separate discussion. I'd really like the input of the other maintainers, @gavanderhoorn, @ipa-nhg? I don't have any problem merging this if the node gives an error for the velocity interface. I think it is a big issue, but I understand that solving it is hard and will delay the merge. A third option we have is to create a new branch for the e-series; not ideal, but at least by using a branch in the upstream repository (and linking the branch in the readme of the master branch) we would have a common branch to submit PRs against. Is there any update on this? I can confirm that @dniewinski 's fork works with a UR3e on Ubuntu 16.04 and Kinetic. The UR software version is 5.3.X.X. I had the error mentioned in this issue with the current kinetic-devel branch. I cherry-picked #269 and fixed a small formatting issue. This should (hopefully) allow the CI checks to pass. I pushed a commit to the disable-velocity-iface branch in my fork with the proposal to temporarily disable the ros_control velocity interface for the URe series. Being a single commit, I think it should be easily reverted. @gavanderhoorn, @ipa-nhg, would you rather keep this open and try to merge only when the full functionality is there, or should we push that commit to this branch and merge? I'm not sold on creating yet another branch for the e-series. 
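The intermediate option discussed here — merge now, but raise an error when the unsupported velocity interface is requested on an e-series controller — could look like the following sketch. The names (`select_control_interface`, the exception type) are illustrative assumptions; the actual change would live in the driver's C++ code.

```python
class UnsupportedInterfaceError(RuntimeError):
    """Raised when a ros_control interface is requested that the driver
    cannot currently support on the connected controller."""

def select_control_interface(requested, major_version):
    """Return the requested ros_control interface, refusing 'velocity' on
    e-series controllers (PolyScope major version >= 5) until it is fixed.

    This mirrors the warning/error behavior proposed in the discussion;
    it is a sketch, not the real driver API.
    """
    if requested == "velocity" and major_version >= 5:
        raise UnsupportedInterfaceError(
            "ros_control velocity interface is not yet supported on "
            "e-series controllers (PolyScope %d.x); use the position "
            "interface instead" % major_version)
    return requested
```

The point of failing loudly rather than silently is the second "indication that ros_control is not supported" that the maintainers ask for, without having to delete the launch files.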
@miguelprada I vote for merge, adding some lines to the Readme about the limitation of the support for URe @miguelprada wrote: would you rather keep this open and try to merge only when the full functionality is there or should we push that commit to this branch and merge? I'm not sold on creating yet another branch for the e-series. I'm not a fan of the duplicated launch files (just as I wasn't too happy with all the duplicated packages in universal_robot), but it makes sense to merge this first and do cleanup later. re: ros_control problem: I suggest we remove the ros_control related launch file(s) for the e-series for now, that should give a (first) indication that ros_control is currently not supported with e-series controllers. As @ipa-nhg suggests, we also clearly document this. That would be a second indication that ros_control is not supported at the moment. If/when fixed, we'll update the documentation and add the appropriate launch file. I'd like to keep history in tact for this one (so no squash-merge), but am unsure about the last two commits by @miguelprada. I'm not a fan of the duplicated launch files (just as I wasn't too happy with all the duplicated packages in universal_robot), but it makes sense to merge this first and do cleanup later. A minimum of 6 separate launch files for each model (3xCB and 3xe) are needed in order to load the proper URDF models. However, the number of launch files per model could be reduced at the cost of adding one (or more?) extra argument(s). This can be dealt with later, though. re: ros_control problem: I suggest we remove the ros_control related launch file(s) for the e-series for now, that should give a (first) indication that ros_control is currently not supported with e-series controllers. As @ipa-nhg suggests, we also clearly document this. That would be a second indication that ros_control is not supported at the moment. If/when fixed, we'll update the documentation and add the appropriate launch file. 
Not a big fan of removing the launch files since: position interface seems to work just fine one can still try to use the velocity interface, either through the CB launch files or using custom ones I understand that the hack I proposed above is far from ideal, but it would be my preferred choice. I'd like to keep history in tact for this one (so no squash-merge), but am unsure about the last two commits by @miguelprada. Note sure what can be done about these. They are required for the travis checks to pass, but I won't complain if they are removed and the PR is force-merged. One last thing. I've just found out that with the default ros_control controller configuration the driver is not publishing data from the e-series as fast as it can (e.g. see this). I'm not sure this is a big deal, since controllers will still update at 500Hz, but it did cause me to waste some time analyzing incomplete data while trying to characterize the FT sensor signals. Hi, I want to control the ur5e robot by first trying the test_move.py script from the universal_robot package. I am not sure how to clone the ur_moder_driver and universal_robot fork, should I clone the ur_e branch or the kinetic(-devel)? I am using the ROS kinetic distro. Then, if I "git clone .." the two forks from dniewinski (and build & source), should I be able to run : rosrun ur_modern_driver test_move.py, or are there more changes to be made? @IndustrialEngStudent first of all, please do not post general how do I... type questions on pull request discussions such as this. These type of questions are better posted in ROS Answers. You should only need to clone the dniewinski/kinetic-devel branch into your workspace. Installing universal_robot from the binaries already includes the URe models and is the recommended way. @miguelprada will do, thank you Hi, I am not sure if this should be said here, (the fork doesn't have issues), but here you go. 
Trying the low_bandwidth_trajectory_follower I found out it didn't work on a UR10e because the firmware version is 5.x and not 3.x. I got past this issue, and the driver worked well after changing an equality check to a greater-than-or-equal in factory.h line 76, as shown below: bool isVersion3() { return major_version_ >= 3; } If you think this won't cause problems somewhere else, I am happy to send a pull request to @dniewinski 's fork. My last problem has been solved; it was just about CPU performance. ros_control can run very well on an Intel i5-8400, but not on an AMD Ryzen 5 2400 or an Intel i5-6500, so if you need real-time control of the ur5e, you should have a better CPU. There is another question: I find that when I use ros_control, I can't use the /ur_driver/URScript topic to control IO, just as below. Is there any solution? As we've officially deprecated ur_modern_driver, I'm closing this PR. Please refer to the announcement on ROS Discourse. @dniewinski: a huge thanks for your original work on getting ur_modern_driver compatible with e-series controllers :+1: . It has been used by quite a few users here, while we were waiting on the official release of ur_robot_driver. To all who have commented here: I would strongly recommend you migrate to ur_robot_driver, as it is officially supported by UR, includes support for all CB3 and e-series robots with up-to-date Polyscope versions and provides an enhanced user experience when configuring the controller to work with ROS.
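The one-line fix quoted in the thread (changing `==` to `>=` in `isVersion3()`) is worth spelling out, since it is an easy bug to reintroduce: CB3 controllers report firmware 3.x, while e-series controllers report 5.x, so an exact-match check silently excludes the newer robots. Below is a Python illustration of the same logic; the real code is C++ in the driver's factory.h.

```python
def parse_major(version_string):
    """Extract the major firmware version from a string like '5.1.0'."""
    return int(version_string.split(".")[0])

def is_version3_or_later(major_version):
    # The original check was effectively `major_version == 3`, which
    # excluded the e-series 5.x firmware; the proposed fix compares
    # with >= so both CB3 (3.x) and e-series (5.x) pass.
    return major_version >= 3
```

With this change, a UR5 on 3.5.3 and a UR10e on 5.1.0 both take the "version 3 or later" code path, which is what the low_bandwidth_trajectory_follower needed.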
gharchive/pull-request
2018-10-17T16:26:52
2025-04-01T06:45:39.613907
{ "authors": [ "AustinDeric", "ChunJyeBehBeh", "IndustrialEngStudent", "Joinyong", "brianjohnzhang", "brianzhang-git", "dniewinski", "gavanderhoorn", "geg58", "happygaoxiao", "ipa-nhg", "martimorta-wood", "miguelprada", "tkelestemur", "yangbenbo" ], "repo": "ros-industrial/ur_modern_driver", "url": "https://github.com/ros-industrial/ur_modern_driver/pull/216", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
769017902
[tool] load parameters before initializing Description With this PR the tool parameters will be available when onInitialize is called. One can use this feature to solve #1138 by adding an rviz::StringProperty, e.g. named Tool Name, and then use its value to rename the tool in onInitialize. I'm not sure about the purpose of this PR yet. Isn't it common practice to first initialize an object (display, panel, tool, ...) and later load its parameters? I don't like the idea of coupling the loading to the creation of a tool. Regarding #1138: I guess you can always call setName() on your tool, can't you? Isn't it common practice to first initialize an object (display, panel, tool, ...) and later load its parameters? Hmm.. I'm not quite sure. After all, the initialization might depend on the parameters. What would be the downside of having the parameters available in onInitialize? Regarding #1138: I guess you can always call setName() on your tool, can't you? Calling setName has no effect after onInitialize. At least it has no effect when called in activate. Besides, activate takes place only when the tool is clicked, so even if it worked there the name would only change to the correct one once the tool had been clicked. After all, the initialization might depend on the parameters. It shouldn't. It's always possible to load a different config later. However, the initialization should always work. Calling setName has no effect after onInitialize. I don't see why this should be the case. Currently, setName() is called just shortly before initialize(): https://github.com/ros-visualization/rviz/blob/13d061d973e96c6315c7768a7f316d2cc501299d/src%2Frviz%2Ftool_manager.cpp#L235-L237 It shouldn't. It's always possible to load a different config later. However, the initialization should always work. I see... 
I did some more digging in the code, and the issue with setName and setIcon is the following: they have no effect on the menu entries because those are created only once per tool in https://github.com/ros-visualization/rviz/blob/2fe6d33f1ca7ceccd916cea94c199ac8a2f25ca5/src/rviz/visualization_frame.cpp#L1177-L1188 From what you explained about the initialization, I believe that the proper solution to #1138 is to change the setName and setIcon methods so that they force a reload of the action and update the member variables action_to_tool_map_, tool_to_action_map_, toolbar_, and remove_tool_menu_ in VisualizationFrame. After doing so, calling setName or setIcon from an overloaded load method should do the trick. What do you think? That sounds reasonable. Would it be sufficient to change the name? I think having the same icon for the same tool would facilitate its identification. I suggest introducing a signal nameChanged(), such that the VisFrame is informed about name changes. Completely agree. I'll do it then. I've force-pushed the changes. Calling setName now updates the toolbar and this PR solves #1138. Great suggestions! I've force-pushed again to meet them. Two more nitpicks. Otherwise looks good. Thanks. That's completely understandable :+1:. I've pushed the changes. Thanks a lot.
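The design agreed on above — `setName()` emits a `nameChanged()` signal so that VisualizationFrame can refresh the corresponding toolbar action instead of keeping a stale name — is implemented with Qt signals in rviz's C++ code. The following is a minimal, language-agnostic observer sketch of the same idea; all names here are stand-ins, not rviz's actual API.

```python
class Tool:
    """Sketch of the agreed design: set_name notifies registered listeners,
    standing in for the Qt nameChanged() signal."""
    def __init__(self, name):
        self._name = name
        self._name_changed_callbacks = []

    def on_name_changed(self, callback):
        self._name_changed_callbacks.append(callback)

    def set_name(self, name):
        if name != self._name:
            self._name = name
            for callback in self._name_changed_callbacks:
                callback(self, name)

class Frame:
    """Stand-in for VisualizationFrame: keeps a tool -> toolbar-label map
    that must stay in sync when a tool is renamed."""
    def __init__(self):
        self.tool_to_action = {}

    def add_tool(self, tool):
        # Record the initial label, then subscribe so later renames
        # update the toolbar entry instead of going unnoticed.
        self.tool_to_action[tool] = tool._name
        tool.on_name_changed(lambda t, new_name: self.tool_to_action.update({t: new_name}))
```

Without the subscription in `add_tool`, a rename after initialization would leave the toolbar showing the old name, which is exactly the symptom reported in #1138.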
gharchive/pull-request
2020-12-16T15:46:15
2025-04-01T06:45:39.692575
{ "authors": [ "jcmonteiro", "rhaschke" ], "repo": "ros-visualization/rviz", "url": "https://github.com/ros-visualization/rviz/pull/1570", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
710576868
set CATKIN_PACKAGE_LIBEXEC_DESTINATION which was documented but not set Fixes #1118. Cherry-picked to kinetic-devel in 457a7cbe1885214905a79cdb11ea7d3b9c9e6d56.
gharchive/pull-request
2020-09-28T21:03:58
2025-04-01T06:45:39.693900
{ "authors": [ "dirk-thomas" ], "repo": "ros/catkin", "url": "https://github.com/ros/catkin/pull/1122", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
149230487
ros-gbp organization permission audit As stated in the email to ros-users, please post in this thread with the permissions you would like to retain for the ros-gbp Github organization before May 2nd. @ros-gbp/owners continued in https://github.com/ros-gbp/metapackages-release/issues/1
gharchive/issue
2016-04-18T18:18:56
2025-04-01T06:45:39.718014
{ "authors": [ "jacquelinekay" ], "repo": "ros/rosdistro", "url": "https://github.com/ros/rosdistro/issues/11151", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
427240488
rospilot: 1.5.5-0 in 'melodic/distribution.yaml' [bloom] Increasing version of package(s) in repository rospilot to 1.5.5-0: upstream repository: https://github.com/rospilot/rospilot.git release repository: https://github.com/rospilot/rospilot-release.git distro file: melodic/distribution.yaml bloom version: 0.7.2 previous version for package: 1.5.4-0 rospilot * Fix web_ui.py serving of nodejs dependencies * Upgrade Bootstrap to fix CVEs * Contributors: Christopher Berner Closing, as I'm submitting 1.5.6 instead
gharchive/pull-request
2019-03-30T04:18:48
2025-04-01T06:45:39.721194
{ "authors": [ "cberner" ], "repo": "ros/rosdistro", "url": "https://github.com/ros/rosdistro/pull/20760", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
583660483
cob_calibration_data: 0.6.14-1 in 'kinetic/distribution.yaml' [bloom] Increasing version of package(s) in repository cob_calibration_data to 0.6.14-1: upstream repository: https://github.com/ipa320/cob_calibration_data.git release repository: https://github.com/ipa320/cob_calibration_data-release.git distro file: kinetic/distribution.yaml bloom version: 0.9.3 previous version for package: 0.6.13-1 cob_calibration_data * Merge pull request #164 <https://github.com/ipa320/cob_calibration_data/issues/164> from fmessmer/remove_cob4-22 remove cob4-22 * remove cob4-22 * Merge pull request #163 <https://github.com/ipa320/cob_calibration_data/issues/163> from HannesBachter/add_cob4-23 add cob4-23 * add cob4-23 * Merge pull request #162 <https://github.com/ipa320/cob_calibration_data/issues/162> from fmessmer/feature/python3_compatibility [ci_updates] pylint + Python3 compatibility * activate pylint checks from feature branch * Merge pull request #160 <https://github.com/ipa320/cob_calibration_data/issues/160> from fmessmer/rosenv_after_script use rosenv for AFTER_SCRIPT * use rosenv for AFTER_SCRIPT * Merge pull request #159 <https://github.com/ipa320/cob_calibration_data/issues/159> from fmessmer/ci_updates [travis] ci updates * sort travis.yml * rosinstall consistency * add CATKIN_LINT=pedantic * update travis.yml * catkin_lint fixes * Merge pull request #158 <https://github.com/ipa320/cob_calibration_data/issues/158> from HannesBachter/update_cob4-16 calibrate head cam * calibrate head cam * Contributors: Felix Messmer, fmessmer, hyb Holding for sync
gharchive/pull-request
2020-03-18T11:27:51
2025-04-01T06:45:39.725213
{ "authors": [ "fmessmer", "tfoote" ], "repo": "ros/rosdistro", "url": "https://github.com/ros/rosdistro/pull/24140", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1639957835
Fix hardcoded rmw_gid_t length This fixes the following build failure on rolling: error[E0308]: mismatched types --> src/subscription/message_info.rs:120:19 | 120 | data: rmw_message_info.publisher_gid.data, | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected an array with a fixed size of 24 elements, found one with 16 elements Thanks @esteve! @nnmm thank you for fixing this bug!
gharchive/pull-request
2023-03-24T19:55:20
2025-04-01T06:45:39.728571
{ "authors": [ "esteve", "nnmm" ], "repo": "ros2-rust/ros2_rust", "url": "https://github.com/ros2-rust/ros2_rust/pull/309", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2702315827
Remove CODEOWNERS and mirror-rolling-to-master workflow. They are both outdated and both no longer serving their intended purpose. https://github.com/Mergifyio backport jazzy humble
gharchive/pull-request
2024-11-28T14:23:02
2025-04-01T06:45:39.741585
{ "authors": [ "ahcorde", "clalancette" ], "repo": "ros2/pybind11_vendor", "url": "https://github.com/ros2/pybind11_vendor/pull/29", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
137340447
Small syntax correction I went a little crazy shrinking the diff in the rmw_connext_dynamic pull request and broke the build. This fixes it. +1
gharchive/pull-request
2016-02-29T19:02:50
2025-04-01T06:45:39.742476
{ "authors": [ "esteve", "jacquelinekay" ], "repo": "ros2/rmw_connext", "url": "https://github.com/ros2/rmw_connext/pull/138", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1329196657
Update include directory install location, and other updates My goal was to update the header locations to match the new recommendations for overriding packages, but I got a little carried away. There seemed to be a little bit of incorrect information, but most of these changes are me removing unnecessary information. I made updates after reviewing about up to the Windows section. I'd recommend this for backport to Humble. Given the work we've done in #3812 , our documentation is now up-to-date with our implementation. There are definitely further improvements listed in here, and that we'd like to make to the core, but I think we should update the documentation as we update our best practices. So with that said, I'm going to close this out as no longer relevant. @sloretz If you disagree, please do feel free to reopen. We'll then need to rebase this and figure out which parts we want to move forward with.
gharchive/pull-request
2022-08-04T21:33:42
2025-04-01T06:45:39.744546
{ "authors": [ "clalancette", "sloretz" ], "repo": "ros2/ros2_documentation", "url": "https://github.com/ros2/ros2_documentation/pull/2915", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
715856092
Retry failed sftp uploads Fixes #122 I managed to reproduce Mateusz's error by letting it run over a directory with lots of existing files and a high number of threads. Tenacity seems to be working nicely.
gharchive/pull-request
2020-10-06T16:53:04
2025-04-01T06:45:39.802764
{ "authors": [ "tschoonj" ], "repo": "rosalindfranklininstitute/rfi-file-monitor", "url": "https://github.com/rosalindfranklininstitute/rfi-file-monitor/pull/123", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
636787323
Building apk throws error Hi all, I am using the flutter_facebook_login plugin for login with Facebook. Everything is working fine, but when I'm going to build my APK it shows some errors, although the APK is still built in the app folder. What went wrong: Execution failed for task ':flutter_facebook_login:verifyReleaseResources'. 1 exception was raised by workers: com.android.builder.internal.aapt.v2.Aapt2Exception: Android resource linking failed C:\Users\Dell\AndroidStudioProjects\artistry\build\flutter_facebook_login\intermediates\res\merged\release\values-v28\values-v28.xml:7: error: resource android:attr/dialogCornerRadius not found. C:\Users\Dell\AndroidStudioProjects\artistry\build\flutter_facebook_login\intermediates\res\merged\release\values-v28\values-v28.xml:11: error: resource android:attr/dialogCornerRadius not found. C:\Users\Dell\AndroidStudioProjects\artistry\build\flutter_facebook_login\intermediates\res\merged\release\values\values.xml:2817: error: resource android:attr/fontVariationSettings not found. C:\Users\Dell\AndroidStudioProjects\artistry\build\flutter_facebook_login\intermediates\res\merged\release\values\values.xml:2818: error: resource android:attr/ttcIndex not found. error: failed linking references. https://programmingproalpha.blogspot.com/2020/08/how-to-make-facebook-log-in.html Hope it helps
gharchive/issue
2020-06-11T07:29:43
2025-04-01T06:45:39.842105
{ "authors": [ "MohammadUzair1", "swarankargaurav1" ], "repo": "roughike/flutter_facebook_login", "url": "https://github.com/roughike/flutter_facebook_login/issues/276", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
1740918549
🛑 Fossil based IRC Log is down In aa8292b, Fossil based IRC Log (https://rouilj.dynamic-dns.net/fossil/roundup_irc_logs/doc/trunk/log/roundup/) was down: HTTP code: 503 Response time: 462 ms Resolved: Fossil based IRC Log is back up in 3d9dee7.
gharchive/issue
2023-06-05T04:12:18
2025-04-01T06:45:39.844781
{ "authors": [ "rouilj" ], "repo": "rouilj/RoundupAssets", "url": "https://github.com/rouilj/RoundupAssets/issues/476", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
214883678
Custom to Run Composite I'm wondering if it's possible to run the composite command? (as specified here: http://www.imagemagick.org/Usage/layers/) with this wrapper. No, this library does not support the composite command currently. You could try the Composite Operator of Convert with Mogrify.create/2 which calls convert. (Unlike Mogrify.save/2 which calls mogrify.) @talklittle would this work if I want to create a new image which is a combination of a few other images? @shawnbro It sounds like it would work. But I haven't tried it before. If you do try it out, please report back on whether it worked for you. You can probably do image = Mogrify.custom(image, "composite") to add the -composite operator to the list of operators. Might have to do some extra fussing around with it though, because from ImageMagick documentation it looks like ImageMagick uses slightly different syntax to do convert -composite commands. I ended up doing this: image_operator(image, "convert -size: wid #{image_path} \ #{image_to_combine_path} -geometry +0+10 -composite") |> create(in_place: true) which merges image_to_combine_path image into image_path image.
gharchive/issue
2017-03-17T01:25:23
2025-04-01T06:45:39.853387
{ "authors": [ "shawnbro", "talklittle" ], "repo": "route/mogrify", "url": "https://github.com/route/mogrify/issues/39", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1715820945
npx rowscript error I'm starting my RowScript journey by following the 'Getting Started' section of the website. When I try to compile .rows file into .msj file, the following issue occurs: npx rowscript /Users/kirraObj/Desktop/WorkSpace/RowScript/Test/node_modules/rowscript/cli/npm/index.js:16 throw new Error(`Binary not found in node_modules (${os}-${arch})`); ^ Error: Binary not found in node_modules (darwin-arm64) at binPath (/Users/kirraObj/Desktop/WorkSpace/RowScript/Test/node_modules/rowscript/cli/npm/index.js:16:15) at Object.<anonymous> (/Users/kirraObj/Desktop/WorkSpace/RowScript/Test/node_modules/rowscript/cli/npm/index.js:20:24) at Module._compile (node:internal/modules/cjs/loader:1254:14) at Module._extensions..js (node:internal/modules/cjs/loader:1308:10) at Module.load (node:internal/modules/cjs/loader:1117:32) at Module._load (node:internal/modules/cjs/loader:958:12) at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12) at node:internal/main/run_main_module:23:47 Node.js v18.15.0 Operating system is MacOS 13.3.1. Ah yes, it's because I didn't release the Darwin ARM64 version of the compiler, you could try cloning the repo and running some tests in core/src/tests for now. A new Action step to release the ARM64 binaries should be added, which's definitely a low-hanging fruit. @CziSKY You could temporarily make the release.yml run on on.pull_request for your PR to test the releases, and rollback to on.tags after everything looks good. I will publish a new version v1.0.0-alpha.4 for you.
gharchive/issue
2023-05-18T15:19:24
2025-04-01T06:45:39.870467
{ "authors": [ "CziSKY", "anqurvanillapy" ], "repo": "rowscript/rowscript", "url": "https://github.com/rowscript/rowscript/issues/84", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1691693513
🛑 Silico Rubber Polymers is down In f8b185b, Silico Rubber Polymers (https://silicorubberpolymers.com/) was down: HTTP code: 504 Response time: 5932 ms Resolved: Silico Rubber Polymers is back up in 3fe534a.
gharchive/issue
2023-05-02T02:48:04
2025-04-01T06:45:39.886415
{ "authors": [ "rpharaniya" ], "repo": "rpharaniya/websites-uptime-monitor", "url": "https://github.com/rpharaniya/websites-uptime-monitor/issues/876", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1692062946
🛑 Digital Global Hub is down In 49ce22c, Digital Global Hub (https://digitalglhub.com/) was down: HTTP code: 504 Response time: 6387 ms Resolved: Digital Global Hub is back up in 9ddccb0.
gharchive/issue
2023-05-02T09:12:53
2025-04-01T06:45:39.888895
{ "authors": [ "rpharaniya" ], "repo": "rpharaniya/websites-uptime-monitor", "url": "https://github.com/rpharaniya/websites-uptime-monitor/issues/907", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2630831044
ModulNotFound Error I wanted to install the module on my Raspberry Zero in a virtual environment. It works fine with pip; the module is found in Python, and pip installs it into the correct environment. But as soon as I run strandtest.py, I get a ModuleNotFoundError. Without a virtual environment, compiling the C files myself, it works... The little cogs in my brain just unstuck; are you running with sudo? Ok, my fault: when I start the program with sudo I leave the virtual environment, so it can't find the module if it is only installed in that environment; therefore I have to install it on the global Python path. And we need sudo to get access to the pins, right? I can't start it without sudo. Use sudo --preserve-env PATH python example.py and it should work. I've published a new version of the package, too, which fixes some bugs and documents this caveat. Yeah, thank you, that works for me.
gharchive/issue
2024-11-02T22:23:16
2025-04-01T06:45:39.891861
{ "authors": [ "Gadgetoid", "JThielenSchool" ], "repo": "rpi-ws281x/rpi-ws281x-python", "url": "https://github.com/rpi-ws281x/rpi-ws281x-python/issues/116", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
1867350475
Add load/save API Create the types and APIs for loading and saving programs. This should also support copying programs and pasting them. Added in 0.1.3
gharchive/issue
2023-08-25T16:30:16
2025-04-01T06:45:40.012656
{ "authors": [ "rracariu" ], "repo": "rracariu/logic-mesh", "url": "https://github.com/rracariu/logic-mesh/issues/1", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
540073413
Display 24-hour time with discontinuousTimeScaleProvider I'd like to have the chart display time in 24-hour format rather than displaying AM/PM. Hopefully this is simple, but I have not been able to figure it out. Is a custom tickFormat function required? I've tried a variety of d3.timeFormat functions but it just always prints the same value. <XAxis axisAt="bottom" orient="bottom" tickFormat={/* what here? */} /> From issue #675 there is the suggestion to add something like this. <XAxis axisAt="bottom" orient="bottom" tickFormat={d=> timeFormat('%H:%M')(data[d].date)} /> This does display 24-hour time. The default shows the day of the week when it changes, while this does not; a slightly more involved function could check for the change of day.
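The day-change check mentioned at the end can be sketched without d3 as a plain tick-formatter factory; `makeTickFormat` and the `dates` lookup array are illustrative names, not part of the react-stockcharts API:

```javascript
// Illustrative sketch (assumed names, not library API): a tickFormat
// factory that prints 24-hour "HH:MM" and prepends the weekday on the
// first tick and whenever a tick crosses into a new day.
function makeTickFormat(dates) {
  const days = ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"];
  const pad = (n) => String(n).padStart(2, "0");
  return (i) => {
    const d = dates[i];
    const hm = `${pad(d.getHours())}:${pad(d.getMinutes())}`;
    const prev = dates[i - 1];
    if (!prev || prev.toDateString() !== d.toDateString()) {
      return `${days[d.getDay()]} ${hm}`; // day boundary: show weekday
    }
    return hm;
  };
}
```

Since discontinuousTimeScaleProvider passes an index to tickFormat, the formatter looks the date up by index, mirroring the data[d].date pattern from the snippet above.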
gharchive/issue
2019-12-19T04:42:42
2025-04-01T06:45:40.015023
{ "authors": [ "miamiblue2" ], "repo": "rrag/react-stockcharts", "url": "https://github.com/rrag/react-stockcharts/issues/743", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2537969801
False positive for nested ternary Describe the bug If the statement contains multiple question marks, that is, for "?." and ternary, it is marked as nested ternary (see example below). To Reproduce Steps to reproduce the behavior: What is the exact code you are analyzing (this is line 55)? const countDatasets = this.structure?.volume ? this.structure.volume.length : 0; What is the output you are getting? - rrd ~ nested Ternary src\electron\nodes\DrawOrthoslice.ts 👉 Break the nested ternary into standalone ternaries, if statements, && operators, or a dedicated function. See: https://vue-mess-detector.webmania.cc/rules/rrd/nested-ternary.html line #55 has nested ternary 🚨 Expected behavior Ignore "?." constructs. Screenshots If applicable, add screenshots to help explain your problem. Used version number of vue-mess-detector: 0.6.0 Used version number of node & yarn: v22.5.1 and npm, not yarn 10.8.3 Additional context Add any other context about the problem here. Thanks for using our tool 🙏🏻 This will probably get fixed by rrd when he's on again, if not tomorrow morning I can check it out 🙌🏻 @crystalfp do you want to send a PR for this? Thanks for reporting 🌟 I think I solved this It's fixed in the latest commit https://github.com/rrd108/vue-mess-detector/commit/1e8133bed69eb543e31b6723ea794ba7b26511a4 You can try it out thanks to pkg.pr.new npm i https://pkg.pr.new/rrd108/vue-mess-detector@1e8133b @rrd108 is this released already? I wonder if the issue should be closed after it's released or when the fix is done? I close them when the fix is committed. And I add a new comment when the fix is released - if I do not forget it. Implemented in 0.54.0
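For context on the fix: a detector that counts raw `?` characters sees two in the reported line, one from optional chaining and one from the ternary. A token-aware count (an illustrative sketch, not vue-mess-detector's actual implementation) skips `?.` and `??` before counting:

```javascript
// Illustrative sketch only: count ternary "?" while skipping "?."
// (optional chaining) and "??" (nullish coalescing), so that
// `a?.b ? x : y` yields 1 ternary instead of a false-positive 2.
function countTernaryQuestionMarks(line) {
  let count = 0;
  for (let i = 0; i < line.length; i++) {
    if (line[i] !== "?") continue;
    const next = line[i + 1];
    if (next === "." || next === "?") {
      i++; // consume the second character of "?." or "??"
      continue;
    }
    count++;
  }
  return count;
}
```

A real implementation would tokenize properly (e.g. `x ? .5 : 1` would fool this character-level check), but the sketch shows why skipping `?.` removes the false positive: the reported statement then counts a single ternary.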
gharchive/issue
2024-09-20T06:15:44
2025-04-01T06:45:40.023276
{ "authors": [ "David-Pena", "crystalfp", "rrd108" ], "repo": "rrd108/vue-mess-detector", "url": "https://github.com/rrd108/vue-mess-detector/issues/298", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1147756368
autoDispose even if there is a listener next screen Describe the bug I'm setting the counterState = 10 and next page the counterState = 0 Due to autoDispose it cause the stateProvider dispose. Even tho you switch page directly after you set the state. Which seems weird if you ask me, since we listen to the stateProvider at next screen, so why dispose it even before checking if there is still a listener. To Reproduce final counterProvider = StateProvider.autoDispose<int>((_) => 0); class Screen1 extends ConsumerWidget { const Screen1({Key? key}) : super(key: key); @override Widget build(BuildContext context, WidgetRef ref) { return Center( child: ElevatedButton( child: const Text("Press me"), onPressed: () { ref.read(counterProvider.state).state = 10; Navigator.push(context, MaterialPageRoute(builder: (context) => const Screen2())); }, ), ); } } class Screen2 extends ConsumerWidget { const Screen2({Key? key}) : super(key: key); @override Widget build(BuildContext context, WidgetRef ref) { final counter = ref.watch(counterProvider); return Center( child: Text(counter.toString()), ); } } Expected behavior Counter should be 10 and not dispoed so Quick when there is still a listener on the other screen What happens if the route does not load in e.g 5 seconds and if you fetch data from website and you would like it to dispose immediately after it's unused. But set it before you switch route, what can you do then? I do not understand the case you're referring to. Can you make an example? But I'd suggest trying cacheTime. You should be able to see for yourself if it solves your issue We had the discussion about this on Discord, cacheTime solve the issue, but you told me it was not meant for this and you had something else on mind to fix this. 
This is a simple example, but there are cases where I would like to fetch things from an API on one screen through initState, pass the value to the stateProvider, and then switch immediately to the new screen where that value is used. But autoDispose clears the value, even though Screen2 is listening to the provider. We had the discussion about this on Discord; cacheTime solves the issue, but you told me it was not meant for this and you had something else in mind to fix this. The scenario we discussed before was different. This one is solved by cacheTime. Ah okay, but I opened the issue just to have it as a reminder for you. What should I write so I can remind you about the situation we talked about on Discord? There's no need for a reminder. It's part of my 2.0.0 todo-list, so it'll come. Oh okay, sorry for wasting your time here. I really appreciate your work
gharchive/issue
2022-02-23T07:50:33
2025-04-01T06:45:40.045440
{ "authors": [ "robpot95", "rrousselGit" ], "repo": "rrousselGit/river_pod", "url": "https://github.com/rrousselGit/river_pod/issues/1228", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1170737245
StateNotifier<AsyncValue?> throw type cast error Describe the bug StateNotifier<AsyncValue?> throw type cast error when setState with AsyncValue Version Platform: iOS Flutter version: 2.10.3 hooks_riverpod: ^2.0.0-dev.4 To Reproduce void main() { runApp(const ProviderScope(child: App())); } class App extends StatelessWidget { const App({Key? key}) : super(key: key); @override Widget build(BuildContext context) { return const MaterialApp( home: HomePage(), ); } } final lazyTextProvider = StateNotifierProvider<LazyTextNotifier, AsyncValue<String>?>((ref) { return LazyTextNotifier(); }); class LazyTextNotifier extends StateNotifier<AsyncValue<String>?> { LazyTextNotifier() : super(null); Future<void> getText() async { state ??= const AsyncValue.loading(); // throw exception state = await AsyncValue.guard( () => Future.delayed(const Duration(seconds: 2)).then((_) => 'hello'), ); } } class HomePage extends ConsumerWidget { const HomePage({Key? key}) : super(key: key); @override Widget build(BuildContext context, WidgetRef ref) { final result = ref.watch(lazyTextProvider); return Scaffold( appBar: AppBar(title: const Text('Home')), body: Center( child: result?.when( data: (data) => Text(data), error: (error, _) => Text('$error'), loading: () => const CircularProgressIndicator(), ) ?? 
ElevatedButton( onPressed: () => ref.read(lazyTextProvider.notifier).getText(), child: const Text('Get Text'), ), ), ); } } Exception [VERBOSE-2:ui_dart_state.cc(209)] Unhandled Exception: type 'Null' is not a subtype of type 'AsyncValue<dynamic>' in type cast #0 ProviderElementBase.setState package:riverpod/…/framework/provider_base.dart:298 #1 StateNotifierProvider.create.listener package:riverpod/…/state_notifier_provider/base.dart:57 #2 StateNotifier.state= package:state_notifier/state_notifier.dart:225 #3 LazyTextNotifier.getText package:test_app/main.dart:28 #4 HomePage.build.<anonymous closure> package:test_app/main.dart:51 #5 _InkResponseState._handleTap package:flutter/…/material/ink_well.dart:989 #6 GestureRecognizer.invokeCallback package:flutter/…/gestures/recognizer.dart:198 #7 TapGestureRecognizer.handleTapUp package:flutter/…/gestures/tap.dart:608 #8 BaseTapGestureRecognizer._checkUp package:flutter/…/gestures/tap.dart:296 #9 BaseTapGestureRecognizer.handlePrimaryPointer (package:flutter[/src/gestures/tap.dart:230]():<…> package:flutter/…/gestures/tap.dart:1 Expected behavior After 2 seconds of clicking the button, the "hello" text appears. Support for this will likely drop with metaprogramming According to this reply, I will close the issue (https://github.com/rrousselGit/river_pod/pull/1293#discussion_r830901411)
gharchive/issue
2022-03-16T09:11:52
2025-04-01T06:45:40.050068
{ "authors": [ "appano1" ], "repo": "rrousselGit/river_pod", "url": "https://github.com/rrousselGit/river_pod/issues/1282", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2000762851
For AsyncNotifier allow setting its state with several state parts at the same time Is your feature request related to a problem? Please describe. In my current project i use an AsyncNotifier for guiding the user through a 3-step process of creating a new document. The state stores the current step. Each step may fail, where i then set the state to AsyncError. At the same time i would like to set the state's data part to the first step since the user has to restart the process. But apparently this doesn't seem possible, i.e. setting the AsyncError and AsyncData at the same time, leading to just one new state emit. Describe the solution you'd like In its simplest way i would suggest a new factory constructor for AsyncValue: const factory AsyncValue.fromStateParts({ AsyncData? data, AsyncLoading? loading, AsyncError? error, }); which allows setting more than one state at the same time and therefore leading to just one new emit. In my opinion other parts involved (like when function) wouldn't need to be updated since loading would still beat error and data. If a developer would use the new suggested approach he'd know that he probably has to react differently for his watchers/listeners. I'm pretty sure that other combinations (e.g. loading and data) would also make sense to be updated at the same time for certain scenarios. Describe alternatives you've considered Randal Schwartz suggested using AsyncValue.copyWithPrevious or using cascading of state parts, but both didn't work. I also tried writing an extension for AsyncValue but since all constructors are private this didn't work either. If my approach could already be achieved without changing Riverpod please let me know. I'm not sure which problem you're trying to solve exactly. Normally the previous state/error should be fully managed by Riverpod . 
Maybe you don't want an AsyncNotifier and want to handle AsyncValue on your own I would like to avoid an unnecessary 2nd emit of state update by combining the set state call for the mentioned AsyncError and AsyncData parts. At Discord user TekExplorer summarized it like that: "...he wants to transition from an AsyncData(value: a) to AsyncError(value: b, error) and riverpod has internal transitions that block that, unless he was willing to accept 2 rebuilds, 1 to shift to initial, and another to shift to error." I'm totally fine with AsyncNotifier. he wants to transition from an AsyncData(value: a) to AsyncError(value: b, error) a But why? AsyncNotifier comes with automatic transition. You seem to not want that so you probably shouldn't be using it Again: Just to save one unnecessary new state emit. The transition to error is completely ok, but i would like to have the changed data value as will within that transition. Since AsyncValue is capable of containing the information about multiple state at once i thought applying that multiple state in one setter call should be ok too. Hey @rrousselGit after discussing with @rivella50 about this issue on discord it turns out that something doesn't add up, or at least we're surely confused about copyWithPrevious (or its implicit use when transitioning state). Given that asyncTransition handles transitions for us, and that it's still not enough for @rivella50 (he has to inject handmade state instead), why are the following tests failing? Example: // example error class SomeError implements Exception { const SomeError(); } final myNotifierProvider = AsyncNotifierProvider.autoDispose<MyNotifier, int>(MyNotifier.new); class MyNotifier extends AutoDisposeAsyncNotifier<int> { @override FutureOr<int> build() => 0; Future<AsyncValue<int>> mutate({required bool crashIt}) async { final result = await AsyncValue.guard(() async => crashIt ? 
throw SomeError() : 1); if (result case AsyncError()) { const handmadeData = AsyncData(2); final newState = handmadeData.copyWithPrevious(result); state = newState; return newState; } state = result; return result; } @override set state(AsyncValue<int> newState) { super.state = newState; } } Tests: void main() { test('testing my notifier', () async { final container = ProviderContainer(); addTearDown(container.dispose); final listener = container.listen(myNotifierProvider, (previous, next) {}); addTearDown(listener.close); final init = listener.read(); expect(init.requireValue, 0); var result = await container.read(myNotifierProvider.notifier).mutate(crashIt: false); final after = listener.read(); expect(result.requireValue, 1); expect(after.requireValue, 1); result = await container.read(myNotifierProvider.notifier).mutate(crashIt: true); final crashed = listener.read(); expect(result.requireValue, 2); expect(result.error, const SomeError()); // this unexpectedly fails! didn't I copied this..? expect(crashed.requireValue, 2); expect(crashed.error, const SomeError()); // this is failing too as a result! }); } A possible fix and an alternative have been discussed here https://github.com/rrousselGit/riverpod/issues/2102 Btw. this is what i tried and we discussed as well: final newValue = AsyncError<AddPlanConfig>(ImageException('my error'), StackTrace.current) .copyWithPrevious(AsyncData(state.value!.copyWith( nextStep: AddPlanStep.init ))); state = newValue; newValue actually contains both the AsyncError and the new AsyncData value, but the problem comes when trying to apply newValue to state. First we encountered 3 different implementations of copyWithPrevious. When looking at the one for AsyncError it seems ok, but when debugging we see that in asyncTransition it does a copyWithPrevious itself where the AsyncData part gets cut away if the state had a previous version. 
newValue actually contains both the AsyncError and the new AsyncData value, but the problem comes when trying to apply newValue to state. This is expected. You still use .copyWithPrevious. The alternative is to do: state = AsyncError<AddPlanConfig>(ImageException('my error'), StackTrace.current); state = AsyncData(state.value!.copyWith(nextStep: AddPlanStep.init)); As discussed at the mentioned issue, copyWithPrevious is not safe to be used externally. @AhmedLSayed9 But this leads to 2 separate state emits which i wanted to avoid. There's no issue with it. They're called synchronously, consumers will not be notified twice :) Again: Just to save one unnecessary new state emit. No that's a side-effect of the solution you''re trying to implement. I'm asking for the root of the problem. What are you trying to implement, regardless of how many rebuilds are involved. Well, the solution i'm trying to implement (which doesn't seem to be safe at all) doesn't work. Therefore there isn't any side effect happening ;-) The root of the problem is the same as what @AhmedLSayed9 also tried to figure out and implement: Combining several parts of AsyncValue into a new version of it and then apply it in one step to the AsyncNotifier's state, which then should lead to only one state update emit. But it seems that this is not intended to be possible - above all not by using the existing copyWithPrevious methods. Combining several parts of AsyncValue into a new version of it and then apply it in one step to the AsyncNotifier's state, which then should lead to only one state update emit. Could you be more specific? I'm not sure what that means. This is the code i tried: https://github.com/rrousselGit/riverpod/issues/3133#issuecomment-1817890464 If copyWithPrevious is not supposed to be used couldn't that be documented (because i'm sure that other devs also will try to use that method and encounter similar problems)? 
couldn't there be another approach which allows devs to do what @AhmedLSayed9 and i try to do (i'll mention again my naive approach of the factory constructor here)? This is the code i tried: #3133 (comment) Again, you're talking about what you're trying to implement. I want to know what problem you're trying to solve with this. Cf https://xyproblem.info/ I can only repeat myself: https://github.com/rrousselGit/riverpod/issues/3133#issuecomment-1817809947 In my opinion those two state updates belong together and therefore it would make sense to group them together. It's not a problem that i get an error or crash, it's just a semantical problem of how to look at a certain state update built with two aspects. I can only repeat myself: #3133 (comment) In my opinion those two state updates belong together and therefore it would make sense to group them together. It's not a problem that i get an error or crash, it's just a semantical problem of how to look at a certain state update built with two aspects. Remi wants to know the real use-case you need this feature for. Maybe, there's a better solution for your use-case and then, we don't need implement a new feature 😃 Please read again the first part of my initial message: https://github.com/rrousselGit/riverpod/issues/3133#issue-2000762851 Sorry. Your previous link was redirecting to another comment. It's up to Remi then. imo, copyWithPrevious shouldn't be exposed if it's not working as expected. Therefore, you'll be forced to do: state = AsyncData(state.value!.copyWith(nextStep: AddPlanStep.init)); state = AsyncError<AddPlanConfig>(ImageException('my error'), StackTrace.current); Btw, for your use-case, I think you should somehow restart the process by using ref.invalidate i.e: at the retry button that's used when the process fails. That's all possible, yes. 
To me it would just be cleaner to reset the data part right along the occurring error instead of letting the data part stay at a non-current value and wait until the user starts the process again.

What's the problem with my alternative above?

No problem, i've just another point of view for when to reset the state.

Please read again the first part of my initial message: #3133 (comment)

No, as I keep saying, that message isn't the problem. That's the solution you're trying to implement. Don't talk about providers/notifiers here. Take a step back. What UI pattern are you trying to implement? You should be able to formulate your problem in a way that's almost completely independent from Riverpod. For example rather than "I want to recompute a provider and while it's loading show previous data/error", say "I want users to do a pull-to-refresh, but during the refresh I don't want the data/error to disappear from the UI". The former is what you're trying to do. The latter is the actual problem. They can be related, but may not be.

I slightly have to disagree here: Since it's AsyncValue's nature to consist of up to 3 state parts at the same time (which is also usable in the UI for various purposes) i only came to the idea of being able to apply more than one of these state parts in the same state setter call in order to only emit one state update to its consumers, if those state parts somehow belong together. I don't know if without Riverpod such a construct actually exists, therefore my feature request targets Riverpod and i cannot formulate it in a more general way. Regarding the actual problem you wanna know: I don't want my UI consumers having to receive more than one state update for AsyncValue state parts which belong together (e.g. an AsyncError and the corresponding data part change which is implicated by that error).

Saying that AsyncValue consists of 3 parts at the same time is misleading.
AsyncValue is about one state, and optionally gives you access to the previous one. I keep asking to know more about your use-case because you do not seem to respect that rule. You don't want the previous state. You want something else. This is not what AsyncValue is intended to be. As such, without more information about what you're doing, I'm helpless in understanding the problem we're trying to solve. At the moment, the only possible conclusion for this issue to me is to view this feature request as "improper usage of AsyncValue, likely should use a different mechanism". To change my mind I need more information.

Ok, perhaps that's the misunderstanding we have here: For an AsyncError and AsyncLoading you internally mix it with the previous data part (using copyWithPrevious) so that we have the possibility to use that data together with the new error or loading state. What @AhmedLSayed9 and i independently tried is to do the same but with a new data part. Since you already mentioned that copyWithPrevious shouldn't be used from outside Riverpod, the only question was if there couldn't be another possibility for us to pass a new data part along an AsyncError or AsyncLoading state update which does the same as copyWithPrevious but uses the new instead of the previous data part for creating the combined AsyncValue object and as a result only emits one state update to its consumers. Together with @AhmedLSayed9's use cases you were presented one more from me. That's three use cases in sum, i cannot tell you more here, sorry.
To begin with, calling state = twice in a row wouldn't rebuild the widget twice. ref.watch triggers only a single widget rebuild, no matter how many times a provider updates within a single frame.

All understood. "previous data" may be untouched, i'm not interested in that data at all, i'm just interested in having the possibility to let a new AsyncValue be created which represents state to be AsyncError and at the same time allows me to insert new data. But the more i read from you i guess that's not possible and also not intended. Btw. i use ref.listen here and the handler gets invoked twice.

From what you describe, this is not intended, yes.

@lucavenir

import 'package:flutter/material.dart';
import 'package:flutter_riverpod/flutter_riverpod.dart';

void main() {
  runApp(
    const ProviderScope(child: MyApp()),
  );
}

class AsyncTodosNotifier extends AsyncNotifier<List<String>> {
  @override
  Future<List<String>> build() async {
    return [];
  }

  Future<void> addTodo() async {
    state = const AsyncData(['todo']);
    state = const AsyncError('error', StackTrace.empty);
  }
}

final asyncTodosProvider =
    AsyncNotifierProvider<AsyncTodosNotifier, List<String>>(() {
  return AsyncTodosNotifier();
});

class MyApp extends StatelessWidget {
  const MyApp({Key? key}) : super(key: key);

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      theme: ThemeData(
        colorSchemeSeed: Colors.blue,
        useMaterial3: true,
      ),
      home: const MyHomePage(),
    );
  }
}

class MyHomePage extends ConsumerWidget {
  const MyHomePage({Key? key}) : super(key: key);

  @override
  Widget build(BuildContext context, WidgetRef ref) {
    ref.listen(asyncTodosProvider, (_, next) {
      print(next);
    });
    return Scaffold(
      appBar: AppBar(
        title: const Text('Riverpod example'),
      ),
      body: const SizedBox(),
      floatingActionButton: FloatingActionButton(
        key: const Key('increment_floatingActionButton'),
        onPressed: () => ref.read(asyncTodosProvider.notifier).addTodo(),
        tooltip: 'Increment',
        child: const Icon(Icons.add),
      ),
    );
  }
}

But the side effect in that example is asynchronous, that's the catch I guess.

You can just ignore that Future<void> & async, I just copied them from the samples.

ref.listen is invoked immediately on state=, so if you call it twice, you will get two notifications, yes.

You can just ignore that Future<void> & async, I just copied them from the samples.

Yeah okay I get it, it doesn't matter much as they both get executed synchronously in there. Thank you!
gharchive/issue
2023-11-19T09:03:18
2025-04-01T06:45:40.086262
{ "authors": [ "AhmedLSayed9", "lucavenir", "rivella50", "rrousselGit" ], "repo": "rrousselGit/riverpod", "url": "https://github.com/rrousselGit/riverpod/issues/3133", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1322210353
Hello, I would like to reproduce your work

It is a great honor for me to read your article on Multimodal Attention-based Deep Learning for Alzheimer's Disease Diagnosis. Your clear and compact explanation inspired me a lot. However, in the notebook general/diagnosis_making.ipynb, in new = pd.DataFrame.from_dict(d, orient='index').reset_index(), 'd' does not seem to be defined. Meanwhile, I'd like to know where the file all_img_try1_10_31_2021.csv came from; I didn't find it in the ADNI database. Thanks!

Thank you for reaching out @daidaidaluannan! I apologize for missing the "d" variable; I have corrected the code to include where the variable came from. As far as the file goes, it is the metadata file that one can get from downloading MRI images from ADNI. It is not a standard CSV available on ADNI, but rather something that is specific to the exact MRI scans you chose to download. Once you download images, you can also download the metadata associated with those specific images. Hope that helps!
gharchive/issue
2022-07-29T12:47:04
2025-04-01T06:45:40.100789
{ "authors": [ "daidaidaluannan", "michalg04" ], "repo": "rsinghlab/MADDi", "url": "https://github.com/rsinghlab/MADDi/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
994880034
Choose network for hardware wallets

Allows the user to choose the network when selecting hardware wallets. See detailed commit messages for full details.

provider controller

To accomplish this, the provider controller has been taken from web3modal (src/controllers/providers.ts). The majority of this file has stayed the same; the following things were added or changed:

- The onClick method accepts an optional parameter with RPCURL and ChainId here. This is how the whole thing works.
- The connectTo method also accepts the optional parameter. See here

In the future, this file could be used to solve #106, and https://github.com/Web3Modal/web3modal/pull/300

Checklist to test the hardware devices:

[x] Ledger (working with RSK app for RSK Mainnet and Ethereum app for RSK Testnet)
[ ] Trezor
[ ] D'Cent

Rebased to develop and changed the 'preConnect' function name to match the #159 naming convention. These two PRs do not depend on each other now, and each can be rebased quickly onto the other depending on which gets merged first.
gharchive/pull-request
2021-09-13T13:16:03
2025-04-01T06:45:40.108231
{ "authors": [ "ilanolkies", "jessgusclark" ], "repo": "rsksmart/rLogin", "url": "https://github.com/rsksmart/rLogin/pull/150", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
428671779
Ffmpeg packages not being built as shared libraries

The current ffmpeg build fails due to libaom not being built as a shared library. The error from the Travis build can be seen below.

This is occurring because the av1 library is built using CMake rather than ./configure, which the opencvdirectinstall.sh script handles by using sed to enable building a shared object file. This might be a problem for other packages, vid_stab and x265, which also use CMake.

Building of vid_stab and x265 as shared libraries is addressed here
gharchive/issue
2019-04-03T09:50:23
2025-04-01T06:45:40.113527
{ "authors": [ "rajat2004", "rsnk96" ], "repo": "rsnk96/Ubuntu-Setup-Scripts", "url": "https://github.com/rsnk96/Ubuntu-Setup-Scripts/issues/17", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
192937408
FrameBufferAllocator should be passed to ReactiveSocket in the constructor

The allocator allocates extra bytes for the frame size. It is needed for framing. But with a different transport layer (a different DuplexConnection) we don't need the extra bytes, so we can use a different FrameBufferAllocator.

Currently the allocator is a static singleton. It would be good to pass it in the constructor; see the discussion in https://github.com/ReactiveSocket/reactivesocket-cpp/pull/194

Is this still applicable?

Not applicable.
gharchive/issue
2016-12-01T19:20:44
2025-04-01T06:45:40.115428
{ "authors": [ "alexmalyshev", "benjchristensen", "lehecka" ], "repo": "rsocket/rsocket-cpp", "url": "https://github.com/rsocket/rsocket-cpp/issues/196", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
361759029
Remove 1.9.3 from appveyor

Ruby 1.9.3 is broken on AppVeyor: it is unable to install from Rubygems, as rubygems itself isn't tested on Ruby 1.9.3. Since the build is just a sanity check that we work on Windows, let's drop it from the matrix.

See:
rspec/rspec-core#2566
rspec/rspec-mocks#1239
rspec/rspec-expectations#1075
rspec/rspec-support#352

/cc @myronmarston @samphippen

LGTM
gharchive/pull-request
2018-09-19T13:50:13
2025-04-01T06:45:40.143888
{ "authors": [ "JonRowe", "myronmarston" ], "repo": "rspec/rspec-dev", "url": "https://github.com/rspec/rspec-dev/pull/207", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
293164344
Prevent "have_http_status" from using deprecated methods in Rails 5.2+

This alternative PR (to #1945) provides a minimal change to solve the problem of response status methods deprecated in Rails 5.2, as raised by issue #1857. This does so without deprecating or changing any status matchers in RSpec, so a Rails 5.2 upgrade will be transparent, at least with respect to not causing deprecation warnings for the :success, :error, and :missing values of has_http_status. A major or minor release that eliminates the version check will either require Rails 5.2 and map :success to the successful call, or require everyone to change their tests.

You'll need to rebase this so you don't bring a load of 3-7-maintenance commits across to master.

Thank you for prompting that, Jon. Agreed and done. The commit history looked terrible, but the diff was okay. With your prompting, I rebased, squashed, and force pushed. Now you have one commit to backport if you want it. My view of the migration strategy is:

- Now: a patch or minor release with the version check
- With Rails 6 or someday: a major release that requires 5.2 (or later), eliminates the version check, and maps the old syntax to the new method.

With that, no-one has to port forty tests when they upgrade. (I can only guess the number of extant has_http_status :success tests, but doubt that it's a trivial count.)

Jon: Thank you for the further guidance on the change that you would like to see. I'm afraid I went a little further, and made the change forward compatible as well as backward compatible. Allowing the new syntax for any version didn't seem right unless it worked for any version.

The two variants failing on Travis are failing in build.

Thank you for prompting those additional tests. What could go wrong? It turns out that the type codes came back empty in the messages for those new generic methods. I couldn't resist drying up the tests with a shared_examples block. They were very repetitive.
@JonRowe The two failing combinations are gem dependency errors from bundler. If you hate the refactoring, I'll redo it copy-paste. Let me know if you would like any further changes. Thanks for the effort getting this over the line! In my opinion, this calls for a release, since the deprecation warning is REALLY annoying when upgrading rails projects. I would suggest a minor version bump, since it's an added feature for Rails 5.2 and it doesn't break existing behaviour. /cc @samphippen Any news on a release for this? @samphippen Sorry to bump but I would also appreciate a point release including this for our Rails 5.2 upgrade :heart: I would also really appreciate a point release with this fix. We're currently pointing to 3.8.pre via Git SHAs, but it would be much nicer to just pull this from Rubygems.
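The release strategy discussed above (version check now, symbol mapping later) can be sketched in plain Ruby. This is a hypothetical illustration, not rspec-rails internals: the `normalize_status` method and the `RENAMED_STATUSES` table are invented names, and the mapping follows the Rails 5.2 renames the thread is working around:

```ruby
require "rubygems" # for Gem::Version; loaded by default on modern Ruby

# Hypothetical sketch of a version-gated status-symbol mapping.
# On Rails >= 5.2 the deprecated symbols are translated to the new
# predicate names; older Rails keeps them unchanged.
RENAMED_STATUSES = {
  success: :successful,
  error: :server_error,
  missing: :not_found,
}.freeze

def normalize_status(symbol, rails_version:)
  if Gem::Version.new(rails_version) >= Gem::Version.new("5.2")
    RENAMED_STATUSES.fetch(symbol, symbol)
  else
    symbol
  end
end
```

Under this shape, `have_http_status(:success)` keeps passing across the upgrade, and a later major release could delete the version check and keep only the mapping.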
gharchive/pull-request
2018-01-31T13:49:25
2025-04-01T06:45:40.151399
{ "authors": [ "Aesthetikx", "JonRowe", "LeeXGreen", "jesperronn", "jfi", "wbreeze" ], "repo": "rspec/rspec-rails", "url": "https://github.com/rspec/rspec-rails/pull/1951", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
629488825
Reconsider height of wide thumbnail

To match tidyverse.org: https://github.com/tidyverse/tidyverse.org/issues/446

Perfect timing, @dcossyleon can you factor this into your CSS upgrades?
gharchive/issue
2020-06-02T20:22:53
2025-04-01T06:45:40.222449
{ "authors": [ "apreshill", "hadley" ], "repo": "rstudio/hugo-tourmaline", "url": "https://github.com/rstudio/hugo-tourmaline/issues/42", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
206265610
Error: Variables must be length 1 or 1.

I have an Amazon EMR cluster with RStudio 1.0.136, sparklyr 0.5.3.9000 (installed via devtools::install_github() today, so I believe it's at commit 78bbbf0162d8), and EMR release emr-5.3.0 (including Spark 2.1.0 and R 3.2.2). In a fresh RStudio session, I get the following error when I try to read from an S3 bucket:

> library(sparklyr)
> library(dplyr)
> sc <- spark_connect(master = "local")
> cv <- spark_read_csv(sc, path='s3://wl-applied-math-dev/solar-forecasting/pi_data', name='pidata', memory=FALSE, infer_schema=FALSE)
Error: Variables must be length 1 or 1.
Problem variables: 'database'

I also get a dialog box that pops up when the call finishes:

The returned value cv is an object of class "tbl_spark" that I can get data out of. It reports dimensions of 22,619,520 x 4 and has the columns I expect. Through interactive debugging in RStudio, it looks like the errors are emitted inside the spark_read_csv -> spark_partition_register_df -> on_connection_updated call. Looks like it's trying to update RStudio's listing of tables maybe?

Any chance you can provide an example that reproduces the issue with a local Spark cluster running?

I'm fairly sure this is an issue caused by the tibble package; does updating / reinstalling that package make a difference? What's your sessionInfo()?

In the above example I used master="local", do you mean something else by "local Spark cluster"?

Here's my sessionInfo():

> sessionInfo()
R version 3.2.2 (2015-08-14)
Platform: x86_64-redhat-linux-gnu (64-bit)
Running under: Amazon Linux AMI 2016.09

locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              LC_TIME=en_US.UTF-8
 [4] LC_COLLATE=en_US.UTF-8     LC_MONETARY=en_US.UTF-8   LC_MESSAGES=en_US.UTF-8
 [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 LC_ADDRESS=C
[10] LC_TELEPHONE=C             LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C

attached base packages:
[1] stats graphics grDevices utils datasets methods base

other attached packages:
[1] dplyr_0.5.0 sparklyr_0.5.3-9000

loaded via a namespace (and not attached):
 [1] Rcpp_0.12.9     withr_1.0.2     digest_0.6.12   rprojroot_1.2   assertthat_0.1  mime_0.5
 [7] R6_2.2.0        jsonlite_1.2    xtable_1.8-2    DBI_0.5-1       backports_1.0.5 magrittr_1.5
[13] httr_1.2.1      rstudioapi_0.6  config_0.2      tools_3.2.2     shiny_1.0.0     parallel_3.2.2
[19] yaml_2.1.14     httpuv_1.3.3    base64enc_0.1-3 htmltools_0.3.5 tibble_1.2

I'm at version 1.2 of tibble, which is the latest release. Would you recommend a GitHub install?

Sorry, I misread that! I'll try your repro case and see if I can figure out what's going on.

Looks like I can trigger the same error by doing this:

.Call("rs_connectionUpdated", "Spark", "local - sparklyr", "pidata")

My "Spark" pane in RStudio shows a "local" connection, but not a "local - sparklyr" connection, so I'm thinking that's the mismatch?

@kevinushey this seems to only repro with Spark 2.1.0. The problem seems to be somewhat related to sdf_collect, since one of the columns returns a character(0) for the database field while the others return entries; I have not looked into this further than this. I think this is enough to reproduce:

sc <- spark_connect(master = "local", version = "2.1.0")
copy_to(sc, iris)

Okay, I've found the issue -- it's indeed a problem in sparklyr whereby a column containing a single empty string is turned into an empty vector. I'll get this fixed!

Should be fixed with https://github.com/rstudio/sparklyr/commit/355810e3e2bdc74cb79ce30835876be007b69e35 -- thanks for reporting!

Cool, thanks for the quick turnaround.

@javierluraschi Is there a reason that this fix isn't implemented in any of the releases? When I download the releases (0.5.5, 0.5.6, or bugfix/hotfix-0.5.6) and install them manually (or use install.packages("sparklyr") to download sparklyr 0.5.5 from CRAN directly) the old version of sdf_collect is still being used (I've been manually fixing it so far with assignInNamespace when using one of these versions), but when I pull the current dev version from master the correct version of the function is being used.

devtools::install_github("rstudio/sparklyr", ref = "bugfix/hotfix-0.5.6", force = TRUE)
library(dplyr)
library(sparklyr)
sc <- spark_connect(master = "local")
mt_tbl <- copy_to(sc, mtcars)
# Error: Column `database` must be length 1 or 1, not 0
# Popup error: "R code execution error"
spark_disconnect_all()
detach("package:sparklyr", unload = TRUE)
devtools::install_github("rstudio/sparklyr", ref = "master", force = TRUE)
# commit 14a8bb332d24161f67089e6762b9d8d66e102e4c
library(dplyr)
library(sparklyr)
sc <- spark_connect(master = "local")
mt_tbl <- copy_to(sc, mtcars)
# No error or popup occurs
spark_disconnect_all()

Note: This error only occurs when using RStudio, not when using the console. I'm assuming this is because RStudio is trying to update the Spark connection view panel.

Hello, I am getting the same error by simply using spark_read_csv. I have the latest version of sparklyr coming from CRAN. My csvs contain text and yes, they may contain missing values. How can I fix that? I tried to get the current master but I only see a zip file (I am on Linux with limited admin rights).
gharchive/issue
2017-02-08T17:22:34
2025-04-01T06:45:40.271131
{ "authors": [ "javierluraschi", "kenahoo", "kevinushey", "randomgambit", "ray-p144" ], "repo": "rstudio/sparklyr", "url": "https://github.com/rstudio/sparklyr/issues/477", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1396777299
Image slide at cabins page cropped suboptimally

Describe the bug
Maybe harsh to call this a bug; it's a minor issue. The image slider on indokntnu.no/cabins crops too "short"/shallow (not sure what's the best term) when in:
- Phone, landscape
- Tablet, portrait

To Reproduce
Steps to reproduce the behavior:
1. Go to indokntnu.no/cabins on an iPad or iPhone
2. Scroll down to the image slider
3. Rotate to landscape mode (iPhone) or portrait mode (iPad)
4. See error

Expected behavior
The image slider should keep the same height/width ratio as it does in all other configurations to properly display the full photo.

Screenshots
iPhone in landscape mode:
iPad in portrait mode:

Tablet (please complete the following information):
- Device: iPad Pro 3rd gen
- OS: iPadOS 15.6.1
- Browser: Safari

Smartphone (please complete the following information):
- Device: iPhone X
- OS: 16.0
- Browser: Safari

Additional context

I don't understand the problem. As far as I can tell everything seems to be working well.
gharchive/issue
2022-10-04T20:00:56
2025-04-01T06:45:40.461906
{ "authors": [ "MagnusHafstad", "einmid" ], "repo": "rubberdok/indok-web", "url": "https://github.com/rubberdok/indok-web/issues/1076", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
381855514
FirstMethodArgumentLineBreak plays poorly with arrays and hashes

Maybe this is intended behavior, but it seems that Layout/FirstMethodArgumentLineBreak does nothing if the first argument is a Hash or Array literal. Additionally, I'd like the closing parameter parenthesis to be aligned consistently, but Layout/ClosingParenthesisIndentation is no help in this (perhaps I need to use a different cop?).

Expected behavior

Given rubocop config:

AllCops:
  DisabledByDefault: true
  UseCache: false
  TargetRubyVersion: 2.3.4
Layout/AlignArray:
  Enabled: true
Layout/ClosingParenthesisIndentation:
  Enabled: true
Layout/FirstMethodArgumentLineBreak:
  Enabled: true
Layout/IndentationWidth:
  Enabled: true
  Width: 2
Layout/MultilineMethodCallBraceLayout:
  Enabled: true
  EnforcedStyle: new_line

And offending (contrived) code:

begin
  begin
    sheet.add_row([
      a[0],
      a[1],
    ])
  end
end

begin
  begin
    sheet.add_row({ k1: :v1, k2: :v2 })
  end
end

I'd expect the autofixed code to be:

begin
  begin
    sheet.add_row(
      [
        a[0],
        a[1],
      ]
    )
  end
end

begin
  begin
    sheet.add_row(
      { k1: :v1, k2: :v2 }
    )
  end
end

Actual behavior

begin
  begin
    sheet.add_row([
      a[0],
      a[1],
    ]
    )
  end
end

begin
  begin
    sheet.add_row({ k1: :v1, k2: :v2 }
    )
  end
end

RuboCop version

$ rubocop -V
0.60.0 (using Parser 2.5.3.0, running on ruby 2.3.4 x86_64-darwin17)

@marcotc we came across this issue today as well! It is the exact same as @Epigene where hashes and arrays are not getting new lines in method calls. Do you think that a fix will be released soon? Thanks!

Additional example:

sheet.add_row(k1: v1)

Dunno if this would be considered a separate issue but heredocs don't seem to be considered multiline either:

foo(<<~EOF
  here
EOF
)

I was writing a feature request, but found this issue already filed. I agree with above.

After adding

Layout/FirstMethodArgumentLineBreak:
  Enabled: true
Layout/FirstHashElementLineBreak:
  Enabled: true
Layout/ClosingParenthesisIndentation:
  Enabled: true
Layout/MultilineHashBraceLayout:
  Enabled: true
  EnforcedStyle: new_line

We end up with an autocorrection as follows. Before:

def public_api_as_json(options = {})
  as_json({
    only: [:id, :nature, :deleted, :token],
    methods: [:definition, :description, :label, :size],
  }.merge(options))
end

After:

def public_api_as_json(options = {})
  as_json({
    only: [:id, :nature, :deleted, :token],
    methods: [:definition, :description, :label, :size],
  }.merge(options)
  )
end

I would have liked to end up with

def public_api_as_json(options = {})
  as_json({
    only: [:id, :nature, :deleted, :token],
    methods: [:definition, :description, :label, :size],
  }.merge(options))
end

or

def public_api_as_json(options = {})
  as_json(
    {
      only: [:id, :nature, :deleted, :token],
      methods: [:definition, :description, :label, :size],
    }.merge(options)
  )
end

The cop might be somewhat broken; rubocop does not complain about:

with(:get_object, custom_audience.fb_id,
  fields: request_fields
).

I would expect it to return:

with(
  :get_object,
  custom_audience.fb_id,
  fields: request_fields
).

still there AFAIK

confirmed bug still exists in rubocop 1.2.0
gharchive/issue
2018-11-17T11:44:51
2025-04-01T06:45:40.483886
{ "authors": [ "AlexWayfer", "Epigene", "allentsai", "athornton2012", "lamont-granquist" ], "repo": "rubocop-hq/rubocop", "url": "https://github.com/rubocop-hq/rubocop/issues/6493", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
374832526
[Fix #6415] Fix Style/UnneededCondition auto-correct

Fix Style/UnneededCondition auto-correct so that it generates valid syntax where the else branch contains a multiline statement without parentheses. I went with the approach of parenthesizing the whole else branch statement, but can look at other approaches if that would be better.

Before submitting the PR make sure the following are checked:

[x] Wrote good commit messages.
[x] Commit message starts with [Fix #issue-number] (if the related issue exists).
[x] Feature branch is up-to-date with master (if not - rebase it).
[x] Squashed related commits together.
[x] Added tests.
[x] Added an entry to the Changelog if the new code introduces user-observable changes. See changelog entry format.
[x] The PR relates to only one subject with a clear title and description in grammatically correct, complete sentences.
[x] Run rake default. It executes all tests and RuboCop for itself, and generates the documentation.

"I went with the approach of parenthesizing the whole else branch statement"

That seems ok to me. I hope another cop can remove redundant parentheses. (This cop may not exist today.)

The changes look good, but your branch has to be rebased on top of the current master branch due to merge conflicts.
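To see why parenthesizing the else branch is needed, here is a minimal illustration (with an invented method `b`, not the cop's actual code): collapsing `if a then a else b 1, 2 end` into `a || b 1, 2` does not parse, because a paren-less method call with arguments cannot be the right operand of `||`, while wrapping the branch in parentheses stays valid:

```ruby
# Illustrative only: `b` is a made-up method; the point is the syntax.
def b(x, y)
  x + y
end

a = nil
# `a || b 1, 2` would be a syntax error; the added parentheses fix it:
result = a || (b 1, 2)
```

This mirrors the fix's strategy of parenthesizing the whole else-branch statement during auto-correction.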
gharchive/pull-request
2018-10-29T02:19:30
2025-04-01T06:45:40.489323
{ "authors": [ "bbatsov", "joeadcock", "mikegee" ], "repo": "rubocop-hq/rubocop", "url": "https://github.com/rubocop-hq/rubocop/pull/6418", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
815365536
Fix mislabelled example under Lint/DuplicateBranch

The second IgnoreLiteralBranches was probably supposed to be IgnoreContentBranches :)

Before submitting the PR make sure the following are checked:

[x] The PR relates to only one subject with a clear title and description in grammatically correct, complete sentences.
[x] Wrote good commit messages.
[x] Commit message starts with [Fix #issue-number] (if the related issue exists).
[x] Feature branch is up-to-date with master (if not - rebase it).
[x] Squashed related commits together.
[x] Added tests.
[x] Ran bundle exec rake default. It executes all tests and runs RuboCop on its own code.
[x] Added an entry (file) to the changelog folder named {change_type}_{change_description}.md if the new code introduces user-observable changes. See changelog entry format for details.

You'll also have to fix the source from which this was generated. Thanks!
gharchive/pull-request
2021-02-24T10:58:41
2025-04-01T06:45:40.494427
{ "authors": [ "bbatsov", "unikitty37" ], "repo": "rubocop-hq/rubocop", "url": "https://github.com/rubocop-hq/rubocop/pull/9531", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1111691104
Supported Hierarchical Departments

Is your feature request related to a problem? Please describe.

It would be very useful for RuboCop to support hierarchical/nested departments. For example, if I have a department MyExtension/Foo and MyExtension/Bar, it would be useful to be able to enable/disable MyExtension as a whole. In particular this comes up with things like Cookstyle, which is a tool built on top of rubocop to do Chef correctness linting. They have many departments, all of the form Chef/. It would be useful to be able to turn all of those on or off.

In my particular case, for standard Ruby correctness I tend to stick much closer to RuboCop defaults than Cookstyle defaults, but Cookstyle includes a ton of its own defaults for rubocop rules. So for my cookstyle runs, I actually want to disable all rules, then enable all Chef/* rules, a la something like this:

AllCops:
  DisabledByDefault: true
Chef:
  Enabled: true

Unfortunately, today I have to list every single department manually, and if they add one, I have to add it to my config.

Additional context

See further discussion of this in https://github.com/rubocop/rubocop/issues/9752

CC @jonas054 who wanted a tag in this.
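The requested semantics could boil down to prefix matching on cop names. A hypothetical sketch follows (the cop list and the `enabled_cops` helper are invented for illustration; RuboCop has no such API today):

```ruby
# Invented example data, to illustrate hierarchical department matching.
ALL_COPS = [
  "Chef/Correctness/SomeCop",
  "Chef/Style/AnotherCop",
  "Layout/IndentationWidth",
  "Style/StringLiterals",
].freeze

# A cop counts as enabled if any enabled department is a prefix of its name.
def enabled_cops(enabled_departments)
  ALL_COPS.select do |cop|
    enabled_departments.any? { |dept| cop.start_with?("#{dept}/") }
  end
end
```

With `DisabledByDefault: true` plus an enabled `Chef` entry, this would select every `Chef/*` cop without listing each sub-department by hand, which is exactly what the Cookstyle use case above needs.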
gharchive/issue
2022-01-22T21:21:05
2025-04-01T06:45:40.517278
{ "authors": [ "jaymzh" ], "repo": "rubocop/rubocop", "url": "https://github.com/rubocop/rubocop/issues/10373", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2143485620
Layout/SpaceBeforeFirstArg is not found until after "some code"

When running rubocop, it seems to miss Layout/SpaceBeforeFirstArg on the first couple of lines. Only after some other code has been analyzed will it detect any other Layout/SpaceBeforeFirstArg offenses.

Expected behavior

There should have been three errors in total. It is missing two more errors:

db/seeds/roles.seeds.rb:3:18: C: Layout/SpaceBeforeFirstArg: Put one space between the method name and the first argument.
db/seeds/roles.seeds.rb:4:18: C: Layout/SpaceBeforeFirstArg: Put one space between the method name and the first argument.

Actual behavior

Rubocop output:

$ rubocop db/seeds
Inspecting 5 files
C....

Offenses:

db/seeds/roles.seeds.rb:8:18: C: Layout/SpaceBeforeFirstArg: Put one space between the method name and the first argument.
Rails.logger.info"User Roles... Completed\n"

Steps to reproduce the problem

Example file: roles.seeds.rb

# frozen_string_literal: true

Rails.logger.info"\nUser Roles..."
Rails.logger.info"--------------\n"

user_roles = %w[admin user]

Rails.logger.info"User Roles... Completed\n"
user_roles.each do |name|
  Role.find_or_create_by(name: name)
end

Rails.logger.info "User Roles... Completed\n"

$ rubocop db/seeds

RuboCop version

$ rubocop -v
0.68.1

Your version of rubocop seems quite old (almost 5 years), can you upgrade any further? I ran your example on the most recent version and am getting the 3 expected offenses, so it has already been fixed somewhere in between.

This issue has been resolved in new versions. Please upgrade to the latest RuboCop. Thank you.

Thank you!
gharchive/issue
2024-02-20T03:46:09
2025-04-01T06:45:40.521078
{ "authors": [ "Earlopain", "koic", "wanchic" ], "repo": "rubocop/rubocop", "url": "https://github.com/rubocop/rubocop/issues/12704", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2068732157
[Fix #12437] Fix an infinite loop error for Style/MethodCallWithArgsParentheses

Fixes #12437.

This PR fixes an infinite loop error for EnforcedStyle: omit_parentheses of Style/MethodCallWithArgsParentheses with Style/SuperWithArgsParentheses. In #12390, super became a separate Style/SuperWithArgsParentheses cop. This PR prevents the infinite loop error by ensuring that Style/MethodCallWithArgsParentheses no longer detects super, following the separation of super into the Style/SuperWithArgsParentheses cop. This approach aligns with the perspective that methods and super have different considerations regarding parentheses usage, as mentioned in #12390.

Before submitting the PR make sure the following are checked:

[x] The PR relates to only one subject with a clear title and description in grammatically correct, complete sentences.
[x] Wrote good commit messages.
[x] Commit message starts with [Fix #issue-number] (if the related issue exists).
[x] Feature branch is up-to-date with master (if not - rebase it).
[x] Squashed related commits together.
[x] Added tests.
[x] Ran bundle exec rake default. It executes all tests and runs RuboCop on its own code.
[x] Added an entry (file) to the changelog folder named {change_type}_{change_description}.md if the new code introduces user-observable changes. See changelog entry format for details.

Looks good to me!
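For context, the two spellings the now-separated cops care about can be shown with a small, self-contained example (illustrative classes, not RuboCop code). Both are valid Ruby; after this fix, only Style/SuperWithArgsParentheses has an opinion about the `super` lines:

```ruby
# Minimal illustration of super with and without argument parentheses.
class Base
  def greet(name)
    "hello #{name}"
  end
end

class WithParens < Base
  def greet(name)
    super(name) # a matter for Style/SuperWithArgsParentheses
  end
end

class WithoutParens < Base
  def greet(name)
    super name # no longer touched by Style/MethodCallWithArgsParentheses
  end
end
```

Keeping the two cops from both claiming `super` is what removes the conflicting corrections behind the infinite loop.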
gharchive/pull-request
2024-01-06T17:19:40
2025-04-01T06:45:40.527071
{ "authors": [ "bbatsov", "koic" ], "repo": "rubocop/rubocop", "url": "https://github.com/rubocop/rubocop/pull/12595", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
131584664
present_collection issue

Hello,
Reading http://www.rubydoc.info/github/intridea/grape-entity/Grape%2FEntity.present_collection and I think there might be a bug in the documentation.

Definition:
class Users < Grape::Entity
  present_collection true
  expose :items, as: 'users', using: API::Entities::Users
end

Usage:
module API
  class Users < Grape::API
    version 'v2'
    # this will render { "users" : [ { "id" : "1" }, { "id" : "2" } ], "version" : "v2" }
    get '/users' do
      @users = User.all
      present @users, with: API::Entities::Users
    end
  end
end

Shouldn't expose :items, as: 'users', using: API::Entities::Users have a singular class name? Now I get a "stack level too deep" error, but when I change from Users to User, it returns null. Please advise. Thank you

My code:
api/events.rb
desc 'Recently booked'
get 'recently_booked' do
  events = Event.active
  present events, with: API::Entities::Events
end

api/entities/events.rb
module API
  module Entities
    class Events < Grape::Entity
      present_collection true
      expose :location
      expose :events, using: API::Entities::Events
    end
  end
end

Ruby: 2.3.0
Rails: 4.2.5.1
Gems: 2.5.1
$ bundle | grep 'grape'
Using grape-entity 0.4.8
Using grape 0.14.0
Using grape-swagger 0.10.4
Using grape-kaminari 0.1.8

You have a class Users that exposes items as itself it seems; that is probably why?
I have a singular entity for Event.
First, I think you're right that there's a bug in the documentation, but I didn't try. In your code you cannot expose :events as API::Entities::Events because that's the same class :) You should have another singular Event entity.
First, let's fix this here. Take the README example and turn it into a spec here, if it doesn't already exist. Fix the spec and the README example, make a pull request. I can help with that if you get stuck. :+1:
I just came here to bump the issue: documentation states expose :items, as: 'users', using: API::Entities::Users in plural even though it should refer to API::Entities::User (singular).
gharchive/issue
2016-02-05T08:31:29
2025-04-01T06:45:40.536101
{ "authors": [ "dblock", "giedriusr", "pre" ], "repo": "ruby-grape/grape-entity", "url": "https://github.com/ruby-grape/grape-entity/issues/207", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
153597147
Representers

This PR contains:
- allow attaching a custom model parser via GrapeSwagger.register_model_parser
- trailblazer representers support
- model parsers moved to separate classes
- grape-entity no longer a runtime dependency

TODO:
- changelog
- update README

Also related: https://github.com/ruby-grape/grape-swagger/issues/121
This allows creating gems with custom model parsers, like representable, roar, grape-entity.

@dblock Overall, big 👍 on this. Looking forward to seeing this finished. We could take the grape-entity and representable parts out of grape-swagger altogether into separate gems instead of keeping support here, or not.
@dblock I've added a model parsers collection, but many specs use grape entities in tests. I created two gems:
https://github.com/Bugagazavr/grape-swagger-representable
https://github.com/Bugagazavr/grape-swagger-entity
Should I publish these gems before this is merged?
I think you should, and make it very clear in UPGRADING in grape-swagger what users are supposed to do if they use one or the other. I would move specific tests into those projects and leave some basic tests that work for either library in this PR. You could also have a .travis.yml that sets something like ENTITY_LIBRARY and runs "integration" tests with it, a little bit like how https://github.com/dblock/slack-ruby-client for example switches between faye-websocket and celluloid to test. We don't have to, but if you would like to move grape-swagger-* into the ruby-grape organization we can do that too. I can make you co-maintainer of all three, grape-swagger and those two.
@dblock OK, I agree. I squashed my commits, added some information to the README, and updated the changelog.
For specs this uses grape-swagger-entity. .travis.yml is also updated in https://github.com/Bugagazavr/grape-swagger-representable and https://github.com/Bugagazavr/grape-swagger-entity
So, if this is not a problem, we can move these repos to the ruby-grape namespace, delete the git refs to my grape-swagger from the Gemfiles, and add dependencies to the gemspecs.
ah, ok I forgot to add ENTITY_LIBRARY, give me a moment and I'll add it.
This is very close, see my smallish comments. I don't think we should treat grape-entity as "default" in specs, basically. In the end travis would run general specs for many versions of Ruby and only once against each entity library on top of the latest ruby version, or something like that.
I'm going to cut a release of grape-swagger with whatever changes we have on HEAD now FYI; that should give us more time to announce/release this big change.
OK, I created a simple mock shared example to launch tests without any entity library, and added 2 shared examples to test grape-swagger with representable and grape-entity.
I made a bunch of nitpick comments in README. Could you please update https://github.com/ruby-grape/grape-swagger/blob/master/UPGRADING.md? I'll merge then. Thanks for the excellent work.
@dblock done
Fix the build pls. Will hit merge on 💚.
I see https://github.com/ruby-grape/grape-swagger/pull/415, this is cool, I'll just merge and deal with it. Feel free to address any of my minor comments in a future PR. You should e-mail the grape mailing list about this change asking people to give it a shot. Also release the representer libraries and change the MODEL_PARSER env to the gem name instead of a github reference. Thanks 👍
https://rubygems.org/gems/grape-swagger-entity
https://rubygems.org/gems/grape-swagger-representable
What about moving these gems to the ruby-grape org?
@Bugagazavr let's do it offline
@Bugagazavr email me dblock at dblock dot org
gharchive/pull-request
2016-05-07T14:19:48
2025-04-01T06:45:40.550423
{ "authors": [ "Bugagazavr", "dblock" ], "repo": "ruby-grape/grape-swagger", "url": "https://github.com/ruby-grape/grape-swagger/pull/413", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
107830253
Strict Validation breaks 'oneOf' on arrays

When testing against an array that has the "oneOf" property on its items, it breaks the "required" field on the related "oneOf" objects when the strict: true option is provided. A test case is presented below:

require 'json-schema'

schema = {
  "$schema": "http://json-schema.org/draft-04/schema#",
  "type": "array",
  "items": {
    "oneOf": [
      {
        "type": "object",
        "properties": {
          "id": { "type": "number" },
          "gender": { "type": "string" },
          "type": { "type": "string" },
          "name": { "type": "string" }
        },
        "required": ["id", "type", "name"]
      },
      {
        "type": "object",
        "properties": {
          "id": { "type": "number" },
          "color": { "type": "string" },
          "shape": { "type": "string" },
          "type": { "type": "string" }
        },
        "required": ["id", "type", "color"]
      }
    ]
  }
}

data = [
  { "id": 1, "type": "person", "name": "david" },
  { "id": 2, "type": "thing", "shape": "square", "color": "blue" }
]

puts "without strict:"
puts JSON::Validator.fully_validate(schema, data)
# [] (passes validation)

puts "with strict:"
puts JSON::Validator.fully_validate(schema, data, strict: true)
# The property '#/0' of type Hash did not match any of the required schemas. The schema specific errors were:
# - oneOf #0:
#   - The property '#/0' did not contain a required property of 'gender'
# - oneOf #1:
#   - The property '#/0' did not contain a required property of 'color'
#   - The property '#/0' contained undefined properties: 'name'
#   - The property '#/0' did not contain a required property of 'shape'
#   - The property '#/0' did not contain a required property of 'color'

It is failing because the first object does not contain "gender", but gender is not a required property. The schema and data are valid when tested elsewhere, e.g. if I plug it in here.

I'm having the same issue with the "allOf" property.
I'm having this issue with "oneOf" as well. Any way we could ignore the strict: true option if the schema explicitly sets "additionalProperties": true?
Yes, that's right.
Strict validation is a non-standard feature that this gem provides but is not a part of the json schema spec. I'd recommend trying '"additionalProperties": false' (not '"additionalProperties": true') as a workaround. Long term I'm not sure if we will continue to support the strict option. @iainbeeston doesn't seem like additionalProperties will help. Look here for explanation why it's not so good for allOf. I was able to fix my issue with 'allOf' by setting additionalProterties=true on all the allOf hashes. The strict:true option is getting overridden by it (which is a good thing). @iainbeeston doesn't seem like additionalProperties will help. Look here for explanation why it's not so good for allOf. Thank you for this reference, this made it very clear.
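The conflict described in this thread can be reproduced without the gem at all. The sketch below is a hedged, minimal model of what strict mode effectively does (treat every declared property as required and forbid undeclared ones); the strict_valid? helper is illustrative only and is not json-schema's actual implementation.

```ruby
# Minimal illustration of why `strict: true` conflicts with oneOf.
# Strict mode effectively treats every declared property as required
# and rejects undeclared properties.
def strict_valid?(branch_properties, object)
  declared = branch_properties.map(&:to_s).sort
  keys = object.keys.map(&:to_s).sort
  declared == keys # every declared property present, none extra
end

person_branch = %w[id gender type name]
thing_branch  = %w[id color shape type]

# Valid under normal oneOf rules (it satisfies the "person" branch's
# required list), but it has no "gender" key, so strict checking
# rejects both branches.
record = { "id" => 1, "type" => "person", "name" => "david" }

matches = [person_branch, thing_branch].count { |b| strict_valid?(b, record) }
puts matches # => 0: under strict rules the record matches no branch
```

This is why "additionalProperties": false on each branch can work as a workaround: it makes the branches explicit about extra keys without forcing every optional property to be present.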
gharchive/issue
2015-09-23T00:30:56
2025-04-01T06:45:40.556813
{ "authors": [ "bopm", "danascheider", "davidruizrodri", "iainbeeston", "mcordell", "robinsingh-bw", "smridge" ], "repo": "ruby-json-schema/json-schema", "url": "https://github.com/ruby-json-schema/json-schema/issues/266", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
37877
Strip parameters when determining whether two URLs refer to the same service

See http://groups.google.com/group/rubycas-server/browse_thread/thread/707d81a7fe002024?hl=en
Ex: http://example.com?foo=1 and http://example.com?foo=2 both refer to the same service, yet ServiceTicket.matches_service? would incorrectly identify them as different services.
Need to modify ServiceTicket.matches_service? (lib/casserver/models.rb:65) so that URL parameters are stripped off both URLs being compared.

Is this bug still alive?
Yes. http://example.com?foo=1 and http://example.com?foo=2 would still be considered different services. I think the right way to address this would be to make the strict behaviour optional, i.e. add a config setting like "ignore_query_in_service_url" and make it true by default.
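The proposed comparison can be sketched with only Ruby's standard uri library. The same_service? helper below is an illustration of the fix being requested, not the actual rubycas-server code; the real change would live in ServiceTicket.matches_service?.

```ruby
require 'uri'

# Compare two service URLs while ignoring their query strings,
# as proposed for ServiceTicket.matches_service?.
def same_service?(url_a, url_b)
  a = URI.parse(url_a)
  b = URI.parse(url_b)
  # Drop the query (and fragment) before comparing.
  [a, b].each { |u| u.query = nil; u.fragment = nil }
  a == b
end

puts same_service?('http://example.com/?foo=1', 'http://example.com/?foo=2') # => true
puts same_service?('http://example.com/a', 'http://example.com/b')           # => false
```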
gharchive/issue
2009-07-17T20:52:02
2025-04-01T06:45:40.568338
{ "authors": [ "brodock", "zuk" ], "repo": "rubycas/rubycas-server", "url": "https://github.com/rubycas/rubycas-server/issues/5", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1456953811
Display of no-reply emails (sender name and warning)

Summary
Unfortunately, responses to this email are not currently being bounced, and people are still responding. All emails from no-reply@humanessentials.org should have their "Do Not Reply" messages emphasized.
Things to consider
No response
Criteria for Completion
[ ] The beginning of the body of all emails from this address should have a succinct, eye-catching(?) text warning noting that replies will not be received.
[x] Sender name of email is "Please do not reply to this email as this mail box is not monitored—Human Essentials" (or close to that if there are technical limitations on chars/length)

Suggest that we change the display name of "no-reply@humanessentials.org" to "Please do not reply to this email as this mail box is not monitored — Human Essentials"
Per @cielf's comment, we think the display name change is an app code change rather than a config change with our email vendor.
Submitted a PR for the sender name portion. #3288
Will close this since we've addressed it in the PR linked.
@edwinthinks -- Technically, we've only covered part of it. But I'm willing to wait and see if it reduces the problem before going in and making further changes.
gharchive/issue
2022-11-20T16:03:27
2025-04-01T06:45:40.574678
{ "authors": [ "cielf", "edwinthinks", "scooter-dangle" ], "repo": "rubyforgood/human-essentials", "url": "https://github.com/rubyforgood/human-essentials/issues/3256", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
91721108
Is it possible to provide features like downloading attachments?

Is it possible to provide features like downloading attachments? Is it possible to configure my own server using the mailsac open source with my own domain and then have some option to download or see attachments? Thanks!
@rahulmr mailparser does the job, but this is something that awaits implementation, maybe v2?
Thanks for the update. Please do let me know once you have implemented it in v2 :)
Sorry guys, I have been tied up and keep meaning to give mailsac love to release the latest, but that has not happened. If you need a quick-and-dirty way to grab attachments: https://github.com/ruffrey/mailsac/blob/master/lib/mailserver.js#L117
After mailparser gets it, the attachments might be sitting right there as base64, but we are (at this time) intentionally dropping them. You could insert them into mongodb as base64, then do the work to render them on your end. From an express route, something like:
res.set('content-type', 'whatever/attachment-type');
res.send(new Buffer(attachmentAsBase64, 'base64'));
Thanks, but I am just an end user :( so I do not know how to do what you have suggested. Where to put the res.set, on which line, etc. Sorry.
@ruffrey it seems the code has changed now. Could you please suggest a better way to get the attachment? Thanks
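The base64 round-trip the maintainer describes can be sketched in a few lines. This is a hedged illustration of the general idea (store the attachment as base64, decode back to raw bytes, then serve with the stored content type), not mailsac's actual API; the attachment data and content type below are made up.

```ruby
require 'base64'

# An attachment as it might sit in the database: base64 text plus a MIME type.
# Both values here are hypothetical examples.
stored = {
  'contentType' => 'text/plain',
  'content' => Base64.strict_encode64('hello attachment')
}

# To serve it, decode back to raw bytes and send them with the stored
# content type, i.e. the equivalent of the express snippet above:
# res.set('content-type', stored['contentType']); res.send(bytes)
bytes = Base64.strict_decode64(stored['content'])
puts bytes # => "hello attachment"
```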
gharchive/issue
2015-06-29T07:18:36
2025-04-01T06:45:40.680613
{ "authors": [ "lifehome", "rahulmr", "ruffrey" ], "repo": "ruffrey/mailsac", "url": "https://github.com/ruffrey/mailsac/issues/14", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1650625146
Update sbt-coveralls to 1.3.7

About this PR
📦 Updates org.scoverage:sbt-coveralls from 1.3.5 to 1.3.7
📜 GitHub Release Notes - Version Diff

Usage
✅ Please merge! I'll automatically update this PR to resolve conflicts as long as you don't change it yourself. If you'd like to skip this version, you can just close this PR.
If you have any feedback, just mention me in the comments below. Configure Scala Steward for your repository with a .scala-steward.conf file. Have a fantastic day writing Scala!

⚙ Adjust future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [
  { groupId = "org.scoverage", artifactId = "sbt-coveralls" }
]
Or, add this to slow down future updates of this dependency:
dependencyOverrides = [{
  pullRequests = { frequency = "30 days" },
  dependency = { groupId = "org.scoverage", artifactId = "sbt-coveralls" }
}]

labels: sbt-plugin-update, early-semver-patch, semver-spec-patch, commit-count:1
Superseded by #147.
gharchive/pull-request
2023-04-01T20:02:33
2025-04-01T06:45:40.691868
{ "authors": [ "scala-steward" ], "repo": "ruippeixotog/akka-testkit-specs2", "url": "https://github.com/ruippeixotog/akka-testkit-specs2/pull/146", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2252188134
feat: support jina ai embedding and reranker Support for Jina AI Embedding and Reranker via API: Integrated Jina AI's embedding and reranking features through the API. Ensured compatibility and optimized performance for real-time data processing. Added documentation to guide users on how to utilize the new Jina AI embedding and reranking features. @ZiniuYu Looks great, thanks. Can you call pnpx changeset and add a changeset? Then your change will end up in our changelog.md file @marcusschiesser Thanks for reviewing! The changeset is added
gharchive/pull-request
2024-04-19T06:31:05
2025-04-01T06:45:40.699497
{ "authors": [ "ZiniuYu", "marcusschiesser" ], "repo": "run-llama/LlamaIndexTS", "url": "https://github.com/run-llama/LlamaIndexTS/pull/734", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2033752560
[Bug]: OpenAI agent with query engine tools crashes while calling the tool

Bug Description
I set up an OpenAI agent with query engine tools. The agent was able to start and enter the chat repl, but it fails as soon as you submit a query. (The editor also warns that the agent code doesn't see query engine tools as a subclass of BaseTools.)

Version
0.9.13

Steps to Reproduce
Set up an OpenAI agent with query engine tools as per this documentation: https://docs.llamaindex.ai/en/stable/examples/agent/openai_agent_with_query_engine.html
Run the agent - agent.chat_repl()
Submit a query

Relevant Logs/Tracebacks
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "'Tesla April 2022 Valuation' does not match '^[a-zA-Z0-9_-]{1,64}$' - 'tools.0.function.name'", 'type': 'invalid_request_error', 'param': None, 'code': None}}

As you can see from the error, the name you gave your tool is invalid. It can only be alphanumeric with dashes or underscores, but your tool name has spaces.
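The constraint in the error message (^[a-zA-Z0-9_-]{1,64}$) is easy to satisfy by normalizing the name before registering the tool. A hedged Ruby sketch of such a sanitizer follows; the helper name is made up, and llama_index itself is Python, so this only illustrates the rule.

```ruby
VALID_TOOL_NAME = /\A[a-zA-Z0-9_-]{1,64}\z/

# Replace disallowed characters with underscores and clamp to 64 chars.
def sanitize_tool_name(name)
  name.gsub(/[^a-zA-Z0-9_-]/, '_')[0, 64]
end

name = sanitize_tool_name('Tesla April 2022 Valuation')
puts name                          # => "Tesla_April_2022_Valuation"
puts name.match?(VALID_TOOL_NAME)  # => true
```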
gharchive/issue
2023-12-09T09:23:06
2025-04-01T06:45:40.703541
{ "authors": [ "logan-markewich", "vaibhavp4" ], "repo": "run-llama/llama_index", "url": "https://github.com/run-llama/llama_index/issues/9411", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
296972344
Internal control plane components should only bind to localhost, not all interfaces

Check out https://github.com/runconduit/conduit/blob/master/BUILD.md#components. Only the public-api and proxy-api components of the control plane should bind to non-localhost interfaces. In all other cases, all of these address references should be prefixed with 127.0.0.1, e.g. -addr=127.0.0.1:8089. This is important to keep the private parts of the control plane inaccessible from the rest of the cluster, since we don't want to use TLS for this within-pod communication.

args:
- "destination"
- "-addr=:8089"

args:
- "proxy-api"
- "-destination-addr=:8089"
- "-telemetry-addr=:8087"

args:
- "tap"
- "-addr=:8088"

args:
- "telemetry"
- "-addr=:8087"

We should have a test that only the public API port (8085), the proxy API port (8086), and the metrics ports (unless/until #351 is implemented) are accessible from outside the pod, and that the other ports mentioned above (at least) are not accessible from outside the pod.
We have a for-local-development-purposes-only docker-compose environment that might need to be modified to support this. In particular, IIUC docker-compose by default gives a separate namespace to every container; they would need to share one in order for them all to communicate with each other via loopback, which would be required if they all bind only to loopback. Ideally, as far as minimizing attack surface, each service would be hard-coded to bind only to loopback interfaces. At the very least, the default (when -addr isn't supplied) should be to bind to loopback interfaces instead of defaulting to all interfaces. We can defer any work to support binding to IPv6 loopback interfaces to #91. To test, it is sufficient to verify that one service that depends on the control plane (e.g. "public-api" or "proxy-api") successfully responds to a request from outside the pod and that connections are dropped for all other ports used by the internal control plane containers.
We'd need to keep the list of ports used by the control plane in cli/cmd/install.go in sync with the list of ports tested by the test.
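The loopback-only binding the issue asks for can be illustrated with Ruby's standard socket library. The control plane itself is Go; this is only a language-agnostic sketch of the idea.

```ruby
require 'socket'

# Binding to 127.0.0.1 makes the listener reachable only from inside
# the same network namespace (i.e. the same pod), unlike binding to
# 0.0.0.0 / ":8089", which exposes the port on all interfaces.
loopback_only = TCPServer.new('127.0.0.1', 0) # port 0 = pick a free port
host = loopback_only.addr[3]
port = loopback_only.addr[1]
puts "listening on #{host}:#{port}" # reachable via loopback only

# A client inside the same namespace can still connect over loopback.
client = TCPSocket.new('127.0.0.1', port)
client.close
loopback_only.close
```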
gharchive/issue
2018-02-14T04:30:48
2025-04-01T06:45:40.711153
{ "authors": [ "briansmith" ], "repo": "runconduit/conduit", "url": "https://github.com/runconduit/conduit/issues/353", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1278324297
Create dice Adds plugin to introduce on-screen dice rolls to the game for a table-top like experience. This does not truly remove the widget from the parent: https://github.com/bogstandard/dice/blob/5841a593b16937fa087c0b2517ff74a7354a4857/src/main/java/com/dice/DicePlugin.java#L101 You can use Widget#getChildren -> copy the array without your widget -> Widget#setChildren just parent.getChildren()[child.getIndex()] = null; Thanks both, hopefully that's done it? I assumed that the arrays were working by reference to their child objects, so nullifying the child would in-turn cause the reference within the array to be nullified too. I assumed that the arrays were working by reference to their child objects, so nullifying the child would in-turn cause the reference within the array to be nullified too. fwiw, java objects are "by-reference" but those references are not shared, so setting some variable/field to null will not also null out other references to the object i.e. in an array somewhere or in another variable
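The by-reference point made in the review above can be demonstrated in a couple of lines; Ruby behaves the same way as Java here (the example is Ruby purely for illustration; the plugin itself is Java).

```ruby
# Setting a local variable to nil does not clear the slot in the array:
# the array holds its own reference to the object.
widget = Object.new
children = [widget]

widget = nil             # only rebinds the local variable
puts children[0].nil?    # => false: the array still references the object

children[0] = nil        # this is what actually removes it from the parent
puts children[0].nil?    # => true
```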
gharchive/pull-request
2022-06-21T11:42:22
2025-04-01T06:45:40.763504
{ "authors": [ "LlemonDuck", "abextm", "bogstandard" ], "repo": "runelite/plugin-hub", "url": "https://github.com/runelite/plugin-hub/pull/2868", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
313840366
Right Click "Release Item" edits tag rather than removes Right clicking on remove item in bank to remove seems to just prompt to edit? Goes to this -> Fixed in #1440 @Adam- Will a new release be pushed to fix this behavior, or will players encountering this issue need to disable the plugin until next Thursday? Adam's working on pushing this out as a hotfix today, far as I can tell.
gharchive/issue
2018-04-12T18:34:11
2025-04-01T06:45:40.765810
{ "authors": [ "Adam-", "GETrackerDan", "Nightfirecat", "SoyChai" ], "repo": "runelite/runelite", "url": "https://github.com/runelite/runelite/issues/1443", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
325765240
[issue] buying offer is at 7/10 yet the progress bar shows it's 1/10.
https://imgur.com/a/QOKM05k
fix pls
Looks fixed, yeah. If this is still a problem, reopen after tomorrow's release.
gharchive/issue
2018-05-23T15:42:28
2025-04-01T06:45:40.767059
{ "authors": [ "deathbeam", "yaze1" ], "repo": "runelite/runelite", "url": "https://github.com/runelite/runelite/issues/3155", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
337695322
Looting Bag Info Is your feature request related to a problem? Please describe. N/A Describe the solution you'd like Requesting an overlay on the looting bag in the inventory that shows the total value of the items in the bag. Additionally showing how many spaces are used/free (x/28) would be nice too. Describe alternatives you've considered N/A Additional context N/A The Looting Bag hub plugin looks like it does this.
gharchive/issue
2018-07-02T23:25:13
2025-04-01T06:45:40.769530
{ "authors": [ "Nightfirecat", "VioRS" ], "repo": "runelite/runelite", "url": "https://github.com/runelite/runelite/issues/4143", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
1176108738
Add abyssal whip (or) to count as abyssal whip in the emote clue overlay
This should probably also be added to the master Sherlock step.
Added in other commit
gharchive/pull-request
2022-03-22T00:18:41
2025-04-01T06:45:40.770580
{ "authors": [ "Adam-", "emielv" ], "repo": "runelite/runelite", "url": "https://github.com/runelite/runelite/pull/14777", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
442952847
ToB Damage Counter

The plugin counts the damage you have done and adds in the extra healing the boss has done to itself. After the fight, or after your death, it will print out a message with the amount of damage you have done in % form. Along with that, it has an option to change the raid party setup from 3 to 5 players and a few chat messages that can be toggled on or off. The default value is 4 (group members in total) since it's the most common group size.
It might include the total amount of damage you have done to the boss, but this will be set to "false" as a default value.
Healing: Maiden is the only boss with true healing; it does not go off a percent base. The other two bosses heal on a percentage basis.
Output:
Options:

I like the idea of a damage counter (this is basically #3894). However I don't like this approach. Specifically:
- this should use the party system to share damages between clients
- this should be able to easily support other things in the future (probably just cox?)
- this should support any size tob party, not just 5 man

Which party system, the RuneLite one or the in-game one? Should it be supportable between raids?
It should be supportable by any size; currently figuring out how to grab the party system and make it automatic.
Can anyone use this?
The RuneLite one. I made a start at https://github.com/Adam-/runelite/tree/dps which works for simple bosses like corp.
I tested this and it's pretty neat. One thing you haven't taken into account is that Verzik is also healing when spawning crabs and during blood phase. Maiden as well, of course.
I have accounted for the healing for all the bosses. Maiden is the actual amount; Nylo and Verzik are percentage based.
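The percentage the plugin reports reduces to simple arithmetic: your damage divided by the boss's total effective health, where self-healing inflates the total. A hedged sketch with made-up numbers:

```ruby
# Effective total health = base health + healing the boss did to itself,
# so healing dilutes everyone's damage percentage.
def damage_percent(my_damage, boss_base_hp, boss_healed)
  total = boss_base_hp + boss_healed
  (100.0 * my_damage / total).round(2)
end

puts damage_percent(175, 500, 200) # => 25.0 (175 of 700 effective HP)
```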
gharchive/pull-request
2019-05-11T04:32:18
2025-04-01T06:45:40.775900
{ "authors": [ "Adam-", "TheMursk", "akarhi", "pvmgod123" ], "repo": "runelite/runelite", "url": "https://github.com/runelite/runelite/pull/8803", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
1460522316
Cannot see finch VM status

What is the problem you're trying to solve?
I want to check the status of the VM that finch has started.

Describe the feature you'd like
finch vm should have a status command to see the VM status.
Usage:
  finch vm [command]
Available Commands:
  init    Initialize the virtual machine
  remove  Remove the virtual machine instance
  start   Start the virtual machine
  stop    Stop the virtual machine
Add status command:
  status  Status of the virtual machine

Workaround:
$ LIMA_HOME=/Applications/Finch/lima/data /Applications/Finch/lima/bin/limactl ls
NAME   STATUS   SSH              ARCH    CPUS  MEMORY  DISK    DIR
finch  Running  127.0.0.1:56835  x86_64  2     8GiB    100GiB  /Applications/Finch/lima/data/finch

Hey, I would like to help out with this one and started implementing it over in a fork. One thing that I found which should be handled in a different way than printing the output via sva.logger.Infof("%s", logs) is the output of lima's ls command, because this way it is formatted incorrectly, as you can see below:
INFO[0000] NAME STATUS SSH ARCH CPUS MEMORY DISK DIR finch Running 127.0.0.1:51977 aarch64 2 4GiB 100GiB /Users/nm/Code/oss/finch/_output/lima/data/finch
Is there a custom logger available to print it in the right way? Or should I just use the regular fmt.Printf? Since I'm pretty new to Golang I could also use a little help with testing the new command. So I am really open for help and advice.
@niklasmtj Thanks! There is an existing internal method to get the lima VM status. Maybe the new command can wrap this internal method and do the proper translation. You can add new e2e tests here and run "make test-e2e" to test it.
Thanks for assigning me. It seems that I did not understand correctly at the beginning of this issue, since I thought this status command should log all the information about the vm; more on this at the end. Anyway, I have now used the internal lima.GetVMStatus method to translate the different vmStatus values appropriately.
As follows:

func (sva *statusVMAction) run() error {
	status, err := lima.GetVMStatus(sva.creator, sva.logger, limaInstanceName)
	if err != nil {
		return err
	}
	switch status {
	case lima.Running:
		sva.logger.Infof("the instance %q is running", limaInstanceName)
		return nil
	case lima.Nonexistent:
		return fmt.Errorf("the instance %q does not exist", limaInstanceName)
	case lima.Stopped:
		return fmt.Errorf("the instance %q is stopped. run `finch %s start` to start the instance", limaInstanceName, virtualMachineRootCmd)
	case lima.Unknown:
		return fmt.Errorf("the instance status of %q is unknown", limaInstanceName)
	default:
		return fmt.Errorf("the instance %q gave a not defined status", limaInstanceName)
	}
}

Before adding (e2e) tests in general, I would like to get the method right. Does this seem like the wrapper command you thought of? In the course of this issue, I would also add the output of lima's ls in general, for a subsequent feature to get information about the vm such as CPU, memory etc., mentioned in the output in my previous comment.
My understanding is that VM status (running/stopped/nonexistent) is the major request of this issue. We could add more information like CPU/memory later. @shaonm can correct me if I'm wrong.
About the translation, I think the command "finch vm status" could just objectively return whatever the status is as "Running"/"Stopped"/"Nonexistent", instead of returning an Error when the vm is not running. Returning an Error when the vm is not running implies an assumption that the vm should be running, which is not an assumption for "finch vm status".
(Difference between them) Logger prints to stderr by default and normally prints the diagnostic information. We can potentially use the Stdout() in stdlib.go. There are multiple ways of using it. We can discuss the details in PR.
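The reviewers' suggestion (print the status itself to stdout as the actual result, keep diagnostics on stderr, and don't treat a stopped VM as an error) can be sketched like this. This is a hedged Ruby analogue of the Go translation, with made-up status symbols.

```ruby
# Map an internal VM status to what `finch vm status` would print.
# The status goes to stdout because it is the actual result; only
# truly unexpected states are treated as errors.
def vm_status_line(status)
  case status
  when :running     then 'Running'
  when :stopped     then 'Stopped'
  when :nonexistent then 'Nonexistent'
  else raise "undefined VM status: #{status}"
  end
end

puts vm_status_line(:stopped) # => "Stopped" (printed, not an error)
```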
gharchive/issue
2022-11-22T20:39:47
2025-04-01T06:45:40.785604
{ "authors": [ "AkihiroSuda", "niklasmtj", "ningziwen", "shaonm" ], "repo": "runfinch/finch", "url": "https://github.com/runfinch/finch/issues/23", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1901341941
Separate the upload-related info to a separate uploadInfo.yaml

- This eliminates the need to manually merge the information downloaded from the platform with the other settings configured in workspaceInfo.yaml, especially when you need to re-download the upload info from the platform to get a new valid token.
- For the time being the run.py CLI tool still supports getting the upload info from workspaceInfo.yaml, to not break any existing setups. We should be able to remove that after a while, though, when everything has been updated to use uploadInfo.yaml.
- If the same setting (e.g. workspaceName) is specified in both workspaceInfo.yaml and uploadInfo.yaml, the one from uploadInfo.yaml takes precedence.

I think this should work naturally if a user converts over from pure RunWhen Local operation (i.e. without an uploadInfo.yaml file) to wanting to tether and upload to the platform. They would just create the workspace in the GUI, download the uploadInfo.yaml, and then rerun the workspace builder. The real workspace name that exists in the platform, which is in the upload info, will replace whatever placeholder workspace name they had been using in the workspace info file.
I took the level-of-detail-related settings out of uploadInfo.yaml, since they're not really download-related and should be specified in workspaceInfo.yaml.
Closes https://github.com/runwhen-contrib/runwhen-local/issues/321
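The precedence rule described above maps directly onto an ordinary dictionary merge. A hedged sketch (the file names are real, the keys and values are hypothetical):

```ruby
# Later hashes win in Hash#merge, matching "uploadInfo.yaml takes precedence".
workspace_info = { 'workspaceName' => 'placeholder-ws', 'defaultLocation' => 'loc-1' }
upload_info    = { 'workspaceName' => 'real-ws', 'token' => 'abc123' }

effective = workspace_info.merge(upload_info)
puts effective['workspaceName'] # => "real-ws": the platform's name wins
```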
gharchive/pull-request
2023-09-18T16:41:29
2025-04-01T06:45:40.791471
{ "authors": [ "stewartshea", "vaterlaus" ], "repo": "runwhen-contrib/runwhen-local", "url": "https://github.com/runwhen-contrib/runwhen-local/pull/330", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
316899658
Custom Dataset Evaluation either 0.0 or nan

Hi, I have a custom dataset set up like VOC. I have trained successfully, but when I evaluate, I get either nan or 0.00 depending on whether or not I use the 07 metric. use_metric_07=True gives me 0.00, and use_metric_07=False gives me nan. Any suggestions on possible solutions? Outputs below:

For True:
Saving cached annotations to /home/vectorweb3/pytorch-faster-rcnn/data/smlchunksdevkitsml/smlchunkssml/ImageSets/Main/test.txt_annots.pkl
/home/vectorweb3/pytorch-faster-rcnn/tools/../lib/datasets/smlchunks_eval.py:205: RuntimeWarning: invalid value encountered in true_divide
  rec = tp / float(npos)
/home/vectorweb3/pytorch-faster-rcnn/tools/../lib/datasets/smlchunks_eval.py:42: RuntimeWarning: invalid value encountered in greater_equal
  if np.sum(rec >= t) == 0:
AP for bre = 0.0000
AP for lymphocyte = 0.0000
Mean AP = 0.0000
~~~~~~~~
Results:
0.000
0.000
0.000
~~~~~~~~

For False:
Evaluating detections
Writing bre smlchunks results file
Writing lymphocyte smlchunks results file
smlchunks07 metric? No
/home/vectorweb3/pytorch-faster-rcnn/tools/../lib/datasets/smlchunks_eval.py:205: RuntimeWarning: invalid value encountered in true_divide
  rec = tp / float(npos)
AP for bre = nan
AP for lymphocyte = nan
Mean AP = nan
~~~~~~~~
Results:
nan
nan
nan
~~~~~~~~

I have run into a problem where, after training the model for about 140 iters, rpn_loss_box becomes NaN, and so does total_loss. I use the VisDrone dataset and transformed it to the VOC format.
iter: 100 / 70000, total loss: 1.238060 rpn_loss_cls: 0.503647 rpn_loss_box: 0.560238 loss_cls: 0.128045 loss_box: 0.046130 lr: 0.001000 speed: 0.645s / iter iter: 120 / 70000, total loss: 1.312589 rpn_loss_cls: 0.471951 rpn_loss_box: 0.668822 loss_cls: 0.135918 loss_box: 0.035899 lr: 0.001000 speed: 0.626s / iter Traceback (most recent call last): File "./tools/trainval_net.py", line 138, in max_iters=args.max_iters) File "/data/huaxhuan/FasterRCNN/pytorch-faster-rcnn/tools/../lib/model/train_val.py", line 379, in train_net sw.train_model(max_iters) File "/data/huaxhuan/FasterRCNN/pytorch-faster-rcnn/tools/../lib/model/train_val.py", line 267, in train_model self.net.train_step(blobs, self.optimizer) File "/data/huaxhuan/FasterRCNN/pytorch-faster-rcnn/tools/../lib/nets/network.py", line 447, in train_step self.forward(blobs['data'], blobs['im_info'], blobs['gt_boxes']) File "/data/huaxhuan/FasterRCNN/pytorch-faster-rcnn/tools/../lib/nets/network.py", line 396, in forward self._add_losses() # compute losses File "/data/huaxhuan/FasterRCNN/pytorch-faster-rcnn/tools/../lib/nets/network.py", line 198, in _add_losses assert (rpn_bbox_pred==rpn_bbox_pred).all() AssertionError @lcf000000 @TVXQ20031226 @AdamGoodwin617 : I ran into the same problem. What fixed it for me was changing all the class names in the xml files to lowercases, but YMMV.
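Both failure modes in this thread trace back to the `rec = tp / float(npos)` warning: when the evaluator matches zero ground-truth boxes for a class (as happens when annotation class names don't line up, e.g. because of casing), recall becomes 0/0. A minimal illustration of that arithmetic — sketched in Rust rather than the project's Python, but the IEEE-754 behaviour is the same as NumPy's true_divide:

```rust
// recall = true positives / number of ground-truth positives.
// With npos == 0 this is 0.0 / 0.0, which is NaN, and any average
// (such as the mean AP printed above) computed over a NaN is itself NaN.
fn recall(tp: f64, npos: f64) -> f64 {
    tp / npos
}

fn main() {
    assert!(recall(0.0, 0.0).is_nan()); // no ground truth matched at all
    assert_eq!(recall(3.0, 4.0), 0.75); // normal case

    let aps = [recall(0.0, 0.0), 0.5];
    let mean = aps.iter().sum::<f64>() / aps.len() as f64;
    assert!(mean.is_nan()); // one NaN AP poisons the mean AP
}
```

This is consistent with the lowercase-class-name fix reported at the end of the thread: once the names match, `npos` is nonzero and the division is well-defined.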
gharchive/issue
2018-04-23T17:02:16
2025-04-01T06:45:40.800306
{ "authors": [ "AdamGoodwin617", "CoderHHX", "mrxiaohe" ], "repo": "ruotianluo/pytorch-faster-rcnn", "url": "https://github.com/ruotianluo/pytorch-faster-rcnn/issues/84", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1923981783
Fixed - scroll issue on categories screen - 1736

## Fixes Issue
closes #1736

## Changes proposed
Removed class py-2 from footer which was causing the issue.

## Screenshots

## Note to reviewers
Please check it @Anmol-Baranwal @rupali-codes
@ketansaresa Please stick to the PR template, there are specific keywords for linking the issue to the PR like fixes or closes which you can read at official docs. Here it is not linked. This will most likely keep the issue open even if the PR is merged, so just keep in mind for future purpose. I'm doing it for you this time.
Thanks for contributing @ketansaresa! 😊 It was wonderful working with you! 😄 If you want to stay updated on LinksHub, join our Discord community! :) We'd love to have you! :)
gharchive/pull-request
2023-10-03T12:05:02
2025-04-01T06:45:40.804562
{ "authors": [ "Anmol-Baranwal", "CBID2", "ketansaresa" ], "repo": "rupali-codes/LinksHub", "url": "https://github.com/rupali-codes/LinksHub/pull/1772", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
328259352
latest hunter version doesn't work for android-studio
I've read Brief overview section and do understand basic concepts. [Yes]
I've read F.A.Q. section and there is no solution to my problem there. [Yes]
I've read Code of Conduct, I promise to be polite and will do my best at being constructive. [Yes]
I've read Reporting bugs section carefully. [Yes]
I've checked that all the hunter_add_package/find_package API used by me in example is the same as in documentation. [Yes]
I'm using latest Hunter URL/SHA1. [Yes]
I've created SSCCE GitHub repository to reproduce the issue: https://github.com/dryganets/hunter-android-studio/tree/sergeyd/new-hunter-not-work-with-android-studio
you could check that master with default cmake works. external-cmake branch works as well (upgrade to 3.11.2 alone).
log:
Android API (CMAKE_SYSTEM_VERSION) Expected: `24`, `21`, `19`, `16` Got: `1`"
Turns out boost resets CMAKE_SYSTEM_VERSION to 1
~/.hunter 11:49 $ ag CMAKE_SYSTEM_VERSION
_Base/a47baf7/4c741bc/e11a692/Build/Boost/Build/CMakeFiles/3.11.2/CMakeSystem.cmake
10:set(CMAKE_SYSTEM_VERSION "17.5.0")
_Base/a47baf7/99b5517/a69be9e/Build/Boost/__system/Build/CMakeFiles/3.6.0-rc2/CMakeSystem.cmake
10:set(CMAKE_SYSTEM_VERSION "1")
_Base/f91a01c/efc1666/5db38ea/Build/Boost/Build/CMakeFiles/3.11.2/CMakeSystem.cmake
10:set(CMAKE_SYSTEM_VERSION "1")
I've checked that the first error in logs IS NOT external.build.failed. [Yes]
I'm building on [OSX| Android]. [I'm using system CMake] CMake version: 3.11.2
I'm using unmodified toolchain from Polly
I'm using next command line on generate step: gradlew assembleDebug
See this issue for starting with Android Studio: https://github.com/ruslo/hunter/issues/618#issuecomment-367532376 And this example for latest updates: https://github.com/forexample/android-studio-with-hunter
gharchive/issue
2018-05-31T19:02:58
2025-04-01T06:45:40.844038
{ "authors": [ "dryganets", "ruslo" ], "repo": "ruslo/hunter", "url": "https://github.com/ruslo/hunter/issues/1459", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
185121436
update doc with new hunter archive following issue 566 https://github.com/ruslo/hunter/issues/566 Applied: https://docs.hunter.sh/en/latest/quick-start/boost-components.html
gharchive/pull-request
2016-10-25T13:42:57
2025-04-01T06:45:40.845698
{ "authors": [ "Ubiquite", "ruslo" ], "repo": "ruslo/hunter", "url": "https://github.com/ruslo/hunter/pull/568", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
196166391
Add cloudformation support. This was surprisingly easy to do: nice work rusoto developers. Only had to make a single change: Make sure the struct field serializers/deserializers call generate_field_name, which seems to be called when generating the actual struct fields. Thank you for the PR! Unfortunately, per the failed CI builds, those imports are, in fact, required. They're just not required by the cloudformation service (but by other services). Could you please add those imports back in and re-push to your branch? Done. I did think they'd be used by other services, really should have been more thorough in checking. LGTM. @matthewkmayer would you mind running the integration tests? LGTM also. I'm running the integration tests now, though I do notice there are none for the newly added cloudformation package. Given that @matthewkmayer and I are the only ones who tend to run them, I'd be happy to open an issue for it and add some after this PR is merged, assuming the tests that are running now find no regressions. Integration tests look good. I've opened #484 so adding new integration tests for this (and adding it to the list of supported services) doesn't get lost. Thanks @obmarg !
gharchive/pull-request
2016-12-16T22:17:23
2025-04-01T06:45:40.849599
{ "authors": [ "adimarco", "indiv0", "obmarg" ], "repo": "rusoto/rusoto", "url": "https://github.com/rusoto/rusoto/pull/483", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
577342860
Incorrect syntax error
&mut { [] }[mid..];
// ^
// Syntax Error: expected SEMI
// expected one of `.`, `;`, `?`, `}`, or an operator, found `[`
// expected one of `.`, `;`, `?`, `}`, or an operator
Hm, I'd say this works as expected? Our error message ("expected SEMI") is not exactly ideal, but we don't generally produce nice syntax error messages at the moment at all. This is actually correct syntax. I believe I found this in libcore. Hm, it gives syntax error to me: https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=f5a00ddfcd0f13aa1021053a30d56d15 I missed the let v = prefix. The original can be found at https://github.com/rust-lang/rust/blob/04e69e4f4234beb4f12cc76dcc53e2cc4247a9be/src/libcore/slice/sort.rs#L658 Ok, see the bug now, thanks for the clarification!
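A compiling variant of the reported shape, for anyone checking the parser behaviour: the empty array `[]` in the report only fails type inference, while the block-expression-then-range-index syntax itself is valid (this assumes a reasonably recent rustc with range indexing on arrays):

```rust
fn main() {
    let mid = 1usize;
    // Same shape as the libcore snippet: a block expression indexed by a
    // range. With a concrete element type it parses, type-checks, and runs;
    // we copy the element out before the block's temporary is dropped.
    let second = { [10u8, 20, 30] }[mid..][0];
    assert_eq!(second, 20);
}
```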
gharchive/issue
2020-03-07T14:34:05
2025-04-01T06:45:40.857540
{ "authors": [ "bjorn3", "matklad" ], "repo": "rust-analyzer/rust-analyzer", "url": "https://github.com/rust-analyzer/rust-analyzer/issues/3512", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1117102151
feat: Support #![recursion_limit] attribute
Resolves #8640
@matklad thanks for the instructions, they were very helpful :) 😍 does this fix https://github.com/rust-analyzer/rust-analyzer/issues/4243#issuecomment-1013306835 too? I think you need to rebase upon master to fix the formatting error. @lnicola async-std is still broken, it seems. Not sure what exactly happens to that cursed macro, but yeah, next is still unresolved by RA.
// 5 <-- increment this if you've looked at `async_std::extension_trait!` without figuring out why it fails
It actually starts working for me if I increase the limit: But it's sooooooooo slow 😢. I didn't test your PR yet.
gharchive/pull-request
2022-01-28T07:55:53
2025-04-01T06:45:40.863245
{ "authors": [ "Veykril", "WaffleLapkin", "lnicola" ], "repo": "rust-analyzer/rust-analyzer", "url": "https://github.com/rust-analyzer/rust-analyzer/pull/11360", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1973727200
unicode_normalization enabled in no-std
This PR enables the unicode-normalization in no-std environments. It also allows fn to_entropy() -> Vec method in no-std. Some formatting sneaked in, too.
Maybe I should introduce alloc feature and gate unicode-normalization and to_entropy with it? Otherwise, looks good! Thanks.
Thanks for fixing it up! Can you squash the fixes into the patches that they fix please. We don't squash when merging so each patch should be correct on its own.
Sorry for taking so long to get back to this. I'd squash all the commits to the single one, and force push. @tcharding would that work for you?
Yep, that is fine for me.
@tcharding I've squashed all the patches into single one.
Thanks for the effort @michalkucharczyk. I started on the same before seeing this 🙈 here is the issue that should be referenced: https://github.com/rust-bitcoin/rust-bip39/issues/55
Description updated.
Please see #59 too which re-enables some unit tests that accidentally never ran - could be relevant.
I've checked the tests (with feature) locally: all of them are green. Let's keep features vs feature fix in #59.
Thanks man!
@stevenroose thanks for review - all addressed. would appreciate your feedback again.
what is wrong with enabling it? if we are in std it shall be propagated there, right?
On Sat, 3 Feb 2024 at 00:05, Tobin C. Harding wrote, commenting on Cargo.toml (https://github.com/rust-bitcoin/rust-bip39/pull/57#discussion_r1476855814):
@@ -13,8 +13,9 @@ edition = "2018"
 [features]
 default = [ "std" ]
-std = [ "unicode-normalization", "serde/std" ]
+std = [ "alloc", "serde/std", "unicode-normalization/std" ]
Yes but we do not need the unicode-normalization "std" feature so why enable it? Also "unicode-normalization" is enabled already in "alloc".
I think we are talking about "unicode-normalization" from "std", right? If its an optional dependency then it should be optional, if its enabled in "std" its not really optional.
I wanted it to be optional in no-std. Originally it was always enabled in std, so I kept it enabled (with default features on, which is std) to be backward compatible.
Oh I see. I'll ack this as is then and we can look at the features more closely later. Thanks for explaining.
Is this failure something I should worry about? (btw: I see it is also failing for other commits).
Can you rebase on https://github.com/rust-bitcoin/rust-bip39/pull/66/files so that the MSRV build works? Thanks.
Can you rebase on master so that the MSRV build works? Thanks.
Done (force pushed). @stevenroose can you kick CI please. On #64 as well.
any updates here?
gharchive/pull-request
2023-11-02T08:14:47
2025-04-01T06:45:40.893443
{ "authors": [ "benma", "michalkucharczyk", "stevenroose", "tcharding" ], "repo": "rust-bitcoin/rust-bip39", "url": "https://github.com/rust-bitcoin/rust-bip39/pull/57", "license": "CC0-1.0", "license_type": "permissive", "license_source": "github-api" }
1847336282
Introduce the small-hash feature for bitcoin_hashes
When enabled this feature swaps the hash implementation of sha512, sha256 and ripemd160 for a smaller (but also slower) one. On embedded processors (Cortex-M4) it can lead to up to a 52% size reduction, from around 37KiB for just the process_block methods of the three hash functions to 17.8KiB. The following numbers were collected on aarch64-unknown-linux-gnu with cargo 1.72.0-nightly.

Original

RUSTFLAGS='--cfg=bench -C opt-level=z' cargo bench

test hash160::benches::hash160_10 ... bench: 33 ns/iter (+/- 1) = 303 MB/s
test hash160::benches::hash160_1k ... bench: 2,953 ns/iter (+/- 187) = 346 MB/s
test hash160::benches::hash160_64k ... bench: 188,480 ns/iter (+/- 11,595) = 347 MB/s
test hmac::benches::hmac_sha256_10 ... bench: 33 ns/iter (+/- 2) = 303 MB/s
test hmac::benches::hmac_sha256_1k ... bench: 2,957 ns/iter (+/- 104) = 346 MB/s
test hmac::benches::hmac_sha256_64k ... bench: 192,022 ns/iter (+/- 6,407) = 341 MB/s
test ripemd160::benches::ripemd160_10 ... bench: 25 ns/iter (+/- 1) = 400 MB/s
test ripemd160::benches::ripemd160_1k ... bench: 2,288 ns/iter (+/- 93) = 447 MB/s
test ripemd160::benches::ripemd160_64k ... bench: 146,823 ns/iter (+/- 1,102) = 446 MB/s
test sha1::benches::sha1_10 ... bench: 41 ns/iter (+/- 0) = 243 MB/s
test sha1::benches::sha1_1k ... bench: 3,844 ns/iter (+/- 70) = 266 MB/s
test sha1::benches::sha1_64k ... bench: 245,854 ns/iter (+/- 10,158) = 266 MB/s
test sha256::benches::sha256_10 ... bench: 35 ns/iter (+/- 0) = 285 MB/s
test sha256::benches::sha256_1k ... bench: 3,063 ns/iter (+/- 15) = 334 MB/s
test sha256::benches::sha256_64k ... bench: 195,729 ns/iter (+/- 2,880) = 334 MB/s
test sha256d::benches::sha256d_10 ... bench: 34 ns/iter (+/- 1) = 294 MB/s
test sha256d::benches::sha256d_1k ... bench: 3,071 ns/iter (+/- 107) = 333 MB/s
test sha256d::benches::sha256d_64k ... bench: 188,614 ns/iter (+/- 8,101) = 347 MB/s
test sha512::benches::sha512_10 ... bench: 21 ns/iter (+/- 0) = 476 MB/s
test sha512::benches::sha512_1k ... bench: 1,714 ns/iter (+/- 36) = 597 MB/s
test sha512::benches::sha512_64k ... bench: 110,084 ns/iter (+/- 3,637) = 595 MB/s
test sha512_256::benches::sha512_256_10 ... bench: 22 ns/iter (+/- 1) = 454 MB/s
test sha512_256::benches::sha512_256_1k ... bench: 1,822 ns/iter (+/- 70) = 562 MB/s
test sha512_256::benches::sha512_256_64k ... bench: 116,231 ns/iter (+/- 4,745) = 563 MB/s
test siphash24::benches::siphash24_1ki ... bench: 1,072 ns/iter (+/- 41) = 955 MB/s
test siphash24::benches::siphash24_1ki_hash ... bench: 1,102 ns/iter (+/- 42) = 929 MB/s
test siphash24::benches::siphash24_1ki_hash_u64 ... bench: 1,064 ns/iter (+/- 41) = 962 MB/s
test siphash24::benches::siphash24_64ki ... bench: 69,957 ns/iter (+/- 2,712) = 936 MB/s

0000000000005872 t _ZN84_$LT$bitcoin_hashes..ripemd160..HashEngine$u20$as$u20$bitcoin_hashes..HashEngine$GT$5input17hc4800746a9da7ff4E
0000000000007956 t _ZN81_$LT$bitcoin_hashes..sha256..HashEngine$u20$as$u20$bitcoin_hashes..HashEngine$GT$5input17hf49345f65130ce9bE
0000000000008024 t _ZN14bitcoin_hashes6sha2568Midstate10const_hash17h57317bc8012004b4E.llvm.441255102889972912
0000000000010528 t _ZN81_$LT$bitcoin_hashes..sha512..HashEngine$u20$as$u20$bitcoin_hashes..HashEngine$GT$5input17h9bc868d4392bd9acE
Total size: 32380 bytes

With small-hash enabled

RUSTFLAGS='--cfg=bench -C opt-level=z' cargo bench --features small-hash

test hash160::benches::hash160_10 ... bench: 52 ns/iter (+/- 3) = 192 MB/s
test hash160::benches::hash160_1k ... bench: 4,817 ns/iter (+/- 286) = 212 MB/s
test hash160::benches::hash160_64k ... bench: 319,572 ns/iter (+/- 11,031) = 205 MB/s
test hmac::benches::hmac_sha256_10 ... bench: 54 ns/iter (+/- 2) = 185 MB/s
test hmac::benches::hmac_sha256_1k ... bench: 4,846 ns/iter (+/- 204) = 211 MB/s
test hmac::benches::hmac_sha256_64k ... bench: 319,114 ns/iter (+/- 4,451) = 205 MB/s
test ripemd160::benches::ripemd160_10 ... bench: 27 ns/iter (+/- 0) = 370 MB/s
test ripemd160::benches::ripemd160_1k ... bench: 2,358 ns/iter (+/- 150) = 434 MB/s
test ripemd160::benches::ripemd160_64k ... bench: 154,573 ns/iter (+/- 3,954) = 423 MB/s
test sha1::benches::sha1_10 ... bench: 41 ns/iter (+/- 1) = 243 MB/s
test sha1::benches::sha1_1k ... bench: 3,700 ns/iter (+/- 243) = 276 MB/s
test sha1::benches::sha1_64k ... bench: 231,039 ns/iter (+/- 13,989) = 283 MB/s
test sha256::benches::sha256_10 ... bench: 51 ns/iter (+/- 3) = 196 MB/s
test sha256::benches::sha256_1k ... bench: 4,823 ns/iter (+/- 182) = 212 MB/s
test sha256::benches::sha256_64k ... bench: 299,960 ns/iter (+/- 17,545) = 218 MB/s
test sha256d::benches::sha256d_10 ... bench: 52 ns/iter (+/- 2) = 192 MB/s
test sha256d::benches::sha256d_1k ... bench: 4,827 ns/iter (+/- 323) = 212 MB/s
test sha256d::benches::sha256d_64k ... bench: 302,844 ns/iter (+/- 15,796) = 216 MB/s
test sha512::benches::sha512_10 ... bench: 34 ns/iter (+/- 1) = 294 MB/s
test sha512::benches::sha512_1k ... bench: 3,002 ns/iter (+/- 123) = 341 MB/s
test sha512::benches::sha512_64k ... bench: 189,767 ns/iter (+/- 10,396) = 345 MB/s
test sha512_256::benches::sha512_256_10 ... bench: 34 ns/iter (+/- 1) = 294 MB/s
test sha512_256::benches::sha512_256_1k ... bench: 2,996 ns/iter (+/- 198) = 341 MB/s
test sha512_256::benches::sha512_256_64k ... bench: 192,024 ns/iter (+/- 8,181) = 341 MB/s
test siphash24::benches::siphash24_1ki ... bench: 1,081 ns/iter (+/- 65) = 947 MB/s
test siphash24::benches::siphash24_1ki_hash ... bench: 1,083 ns/iter (+/- 63) = 945 MB/s
test siphash24::benches::siphash24_1ki_hash_u64 ... bench: 1,084 ns/iter (+/- 63) = 944 MB/s
test siphash24::benches::siphash24_64ki ... bench: 67,237 ns/iter (+/- 4,185) = 974 MB/s

0000000000005384 t _ZN81_$LT$bitcoin_hashes..sha256..HashEngine$u20$as$u20$bitcoin_hashes..HashEngine$GT$5input17hae341658cf9b880bE
0000000000005608 t _ZN14bitcoin_hashes9ripemd16010HashEngine13process_block17h3276b13f1e9feef8E.llvm.13618235596061801146
0000000000005616 t _ZN14bitcoin_hashes6sha2568Midstate10const_hash17h3e6fbef64c15ee00E.llvm.7326223909590351031
0000000000005944 t _ZN81_$LT$bitcoin_hashes..sha512..HashEngine$u20$as$u20$bitcoin_hashes..HashEngine$GT$5input17h321a237bfbe5c0bbE
Total size: 22552 bytes

Conclusion
On aarch64 there's overall a ~30% improvement in size, although ripemd160 doesn't really shrink that much (and its performance also aren't impacted much with only a 6% slowdown). sha512 and sha256 instead are almost 40% slower with small-hash enabled. I don't have performance numbers for other architectures, but in terms of size there was an even larger improvements on thumbv7em-none-eabihf, with a 52% size reduction overall:

Size Crate Name
25.3KiB bitcoin_hashes <bitcoin_hashes[fe467ef2aa3a1470]::sha512::HashEngine as bitcoin_hashes[fe467ef2aa3a1470]::HashEngine>::input
6.9KiB bitcoin_hashes <bitcoin_hashes[fe467ef2aa3a1470]::sha256::HashEngine as bitcoin_hashes[fe467ef2aa3a1470]::HashEngine>::input
4.8KiB bitcoin_hashes <bitcoin_hashes[fe467ef2aa3a1470]::ripemd160::HashEngine as bitcoin_hashes[fe467ef2aa3a1470]::HashEngine>::input

vs

Size Crate Name
9.5KiB bitcoin_hashes <bitcoin_hashes[974bb476ef905797]::sha512::HashEngine as bitcoin_hashes[974bb476ef905797]::HashEngine>::input
4.5KiB bitcoin_hashes <bitcoin_hashes[974bb476ef905797]::ripemd160::HashEngine>::process_block
3.8KiB bitcoin_hashes <bitcoin_hashes[974bb476ef905797]::sha256::HashEngine as bitcoin_hashes[974bb476ef905797]::HashEngine>::input

I'm assuming this is because on more limited architectures the compiler needs to use more instructions to move data in and out of registers (especially for sha512 which ideally would benefit from
64-bit registers), so reusing the code by moving it into functions saves a lot of those instructions. Also note that the const_hash method on sha256 causes the compiler to emit two independent implementations. I haven't looked into the code yet, maybe there's a way to merge them so that the non-const process_block calls into the const fn. Note: commits are unverified right now because I don't have the keys available, I will sign them after addressing the review comments. Can you edit clippy.toml to set too-many-arguments-threshold to 9 (or whatever value you need). Overall concept ACK. I'm super impressed at how small this diff is. I was a little apprehensive about reviewing this PR but it's much smaller than I expected. Just thinking aloud. I wonder if this should be a Cargo feature. Feels kind of weird ... as this doesn't really enable any "feature", and is more like a compilation level thing. I wonder if there are any idiomatic alternatives. Feel to me like this would be best controlled by env var during compilation. Or maybe I'm just overcomplicating. I'm super impressed at how small this diff is. I was a little apprehensive about reviewing this PR but it's much smaller than I expected. Yeah one of my main goals was to keep this very easy to review (and also for me to write, since I don't have much experience writing and optimizing code like this). I'm sure we could squeeze out a lot more space/performance with more effort, but this is probably a good trade-off between code readability and space/performance imho. @dpc regarding the cargo feature thing: sometimes features are used in this way, for example secp has a lowmemory feature that doesn't really add anything to the library but it makes it use a smaller precomputed table to save memory. Env variables would be a bit annoying to use in my opinion, first of all you would have to document them (where?), while with features one can simply look at the cargo.toml and find them. 
Also it would force downstream projects to either have custom scripts to compile, or write a build.rs file that sets these env variables automatically. And it would make it impossible to include two versions of the library with different features (although one might argue that in general nobody would really want to do that..). I agree that it's a little weird to use a cargo feature, but also think it's the right choice, for the same reasons as the rust-secp lowmemory feature. That is, it has zero effect on functionality, and if some dep wants this on, it doesn't (really) make sense for any other dep to be able to turn it off. @tcharding I suspect that constfns are just as fast, but the reason that they (can be) smaller is that macros are always inlined whereas constfns might not be. Inlining is faster because it avoids function call overhead. Also +1 to custom formatting. So I tried to read the dragon book a couple of times before and never made it past the first chapters. Anyone got an suggestions on books/sites to learn about compiler optimisations without making my eyes bleed? With green CI I'm happy to ack this. Its basically just swapping macros for const functions, it did take me a while to work that out though :) Do we want this in the upcoming hashes release? I pushed a new version with all the feedback received so far: I didn't notice any performance regression by switching to const fns in the "fast hash" implementation. Even when compiling with z (size) optimization the numbers where still very close to what they were originally. I also manually formatted the functions with many arguments, I tried to break them up into a few lines and I think they look pretty good now. @tcharding just a sidenote: I don't know much about compilers and optimizations either, the idea behind this PR was just to reuse common code as much as possible rather than expanding the full hash function, which ends up being huge. 
I think the implementation you have here is a pretty good tradeoff between speed and size: it could be made much faster by writing critical sections in assembly manually, or even using SIMD instructions on supported platforms. It could also be made much smaller by once again playing with assembly manually, but that's pretty complicated and this library isn't trying to be either a fast or super small hash functions library, so I think this is more than enough. This is all to say that somebody who really knows these stuff could definitely make big improvements, but they would all have a cost in terms of code readability/simplicity, so I'm not even sure it's worth for this lib. Do we want this in the upcoming hashes release? Yeah, let's do it. Nice! I suspect round could also be a fn but since it can't be a constfn I am less confident that the compiler will inline it. Maybe we could add cfg_attr(feature = "small_hashes", inline(never)) on it (and a corresponding inline(always) in the other case)? But in the interest of not making you do a ton of iterations on this, I'm happy to ACK this as is. But if you want to try this, that'd be great :).
gharchive/pull-request
2023-08-11T19:43:08
2025-04-01T06:45:40.909485
{ "authors": [ "afilini", "apoelstra", "dpc", "tcharding" ], "repo": "rust-bitcoin/rust-bitcoin", "url": "https://github.com/rust-bitcoin/rust-bitcoin/pull/1990", "license": "CC0-1.0", "license_type": "permissive", "license_source": "github-api" }
693296927
Performance regression in 0.2.5 Hi @vadixidav, Sorry for the late notification, I recently made an update of bitarray from 0.2.3 to 0.2.5 and noticed a pretty big performance issue. I have a test in my project (which is pretty much https://github.com/rust-cv/cv/blob/5e4754050bef9d884f8c234072044febfb9a8791/akaze/tests/estimate_pose.rs with a timer) that I run from time to time to check for performance regressions/enhancement when upgrading. It's around 200ms difference, but for my project since I do akaze matching in parallel, it ends up with around 2s more (about twice the nominal time). I confirmed it's related to 0.2.5, and looking at the commits, I saw that this is most probably due to "512-bit SIMD intrinsic" removal. Can you provide some background on it, and is there any future plan regarding this? If you're willing to keep it I could work on specific feature flag for this. Thanks! @killzoner Yes, we could add in a feature flag to preserve this. However, we need to be careful to avoid breaking the code on computers that don't support it. Some people were reporting issues with the LLVM intrinsics on their computers (we don't know what causes the issue). This change is entirely internal, so feel free to create a PR for this. We can have @codec-abc test that it continues to work without the flag, as they have encountered issues building with this in the code. The feature should be disabled by default. You can name it whatever you want, so long as it refers to llvm intrinsics. One alternative that might solve this without the intrinsics is if you can somehow get the autovectorizor to generate the same code as using the LLVM intrinsics does, but I was unable to do that after trying for several hours in godbolt. I would recommend just adding the intrinsics back in behind a feature gate. Also, thank you for letting me know about this issue. I was aware of the regression, but I did not know exactly what effects would occur by changing that. 
Feel free to reach out if you have additional issues, and I would love to get a PR for this feature. @killzoner Hey, I am going to close this issue in favor of #1, as that was created to track this problem. @killzoner Just wanted to let you know that the feature unstable-512-bit-simd was added and published in version 0.2.6 as part of issue #1. @vadixidav just saw that, thanks. I will take a look at 0.2.6. Out of curiosity, is there any alternative approach to brute force matching with rust-cv ecosystem? Also I noticed that plans for rust 1.47 include an upgrade to llvm 11, I will have a look at the features to see if it brings anything new regarding this
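For context on what the removed intrinsics were speeding up: brute-force matching of binary descriptors boils down to a Hamming distance over packed bits, which a portable scalar loop computes one word at a time. A hedged sketch of that loop (not the crate's actual implementation):

```rust
// Hamming distance between two equal-length bit arrays stored as u64 words:
// XOR finds the differing bits, count_ones() (popcount) tallies them. The
// unstable-512-bit-simd feature replaces a word-at-a-time loop like this
// with wide LLVM vector operations.
fn hamming_distance(a: &[u64], b: &[u64]) -> u32 {
    a.iter().zip(b).map(|(x, y)| (x ^ y).count_ones()).sum()
}

fn main() {
    let a = [0b1011_u64, u64::MAX];
    let b = [0b0001_u64, 0];
    // 0b1011 ^ 0b0001 = 0b1010 -> 2 differing bits; u64::MAX ^ 0 -> 64 bits.
    assert_eq!(hamming_distance(&a, &b), 66);
}
```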
gharchive/issue
2020-09-04T15:25:30
2025-04-01T06:45:40.917008
{ "authors": [ "killzoner", "vadixidav" ], "repo": "rust-cv/bitarray", "url": "https://github.com/rust-cv/bitarray/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
718877822
llvm-objdump: Use two hyphens in flags to objdump LLVM 11 changed the behavior of these tools. See https://github.com/rust-embedded/book/issues/269 Thanks for the pull request, and welcome! The Rust team is excited to review your changes, and you should hear from @andre-richter (or someone else) soon. If any changes to this PR are deemed necessary, please add them as extra commits. This ensures that the reviewer can see what has changed since they last reviewed the code. Due to the way GitHub handles out-of-date commits, this should also make it reasonably obvious what issues have or haven't been addressed. Large or tricky changes may require several passes of review and changes. Please see the contribution instructions for more information. Ouch. This means we would need different instructions depending on the Rust version. IIRC double hyphen format has been supported for a while in addition to single hyphen. At least that's what I had observed when testing https://github.com/rust-embedded/cargo-binutils/pull/92
gharchive/pull-request
2020-10-11T16:51:00
2025-04-01T06:45:40.923108
{ "authors": [ "adhoore", "eldruin", "rust-highfive", "therealprof" ], "repo": "rust-embedded/book", "url": "https://github.com/rust-embedded/book/pull/270", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
454524593
Add monotron As discussed in https://github.com/thejpster/monotron/issues/65, I propose to add Monotron to the embedded Rust showcase. I think this project belongs in the showcase as one of the first embedded Rust projects. @thejpster can you validate the content of the entry?

Hard requirements:
- runs on a Texas Instruments TM4C123 microcontroller
- mostly Rust; the base of the project is Rust
- Apache2 or MIT licensed, project on GitHub

Bonus points:
- crazy things such as VGA without dedicated hardware
- instructions to build yours are available
- Travis does a cargo build
- nightly-only project (for ASM)
- "a massive race hazard and full of undefined behaviour"
- no unit tests as far as I know
- some external useful crates have been created for this project, for example embedded-sdmmc

Penalties:
- "a massive race hazard and full of undefined behaviour"
- looks like there is quite a lot of unwrap

r? @korken89 (rust_highfive has picked a reviewer for you, use r? to override) The information provided is correct. @TeXitoi could you please take a look at https://travis-ci.org/rust-embedded/showcase/builds/544364296?utm_source=github_status&utm_medium=notification ? Looks like a timeout, I'll try again in case it was spurious. bors retry Nevermind, thanks again @TeXitoi and @thejpster!
gharchive/pull-request
2019-06-11T07:31:12
2025-04-01T06:45:40.928997
{ "authors": [ "TeXitoi", "jamesmunns", "rust-highfive", "thejpster" ], "repo": "rust-embedded/showcase", "url": "https://github.com/rust-embedded/showcase/pull/18", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
615481539
Silence unused import warning Remove the wildcard use statement inserted by generate-tests.sh that currently generates an unused import warning. bors r+
gharchive/pull-request
2020-05-10T21:36:22
2025-04-01T06:45:40.930381
{ "authors": [ "gkelly", "therealprof" ], "repo": "rust-embedded/svd", "url": "https://github.com/rust-embedded/svd/pull/121", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1271948361
0.24.0 generates wrong const-generic code We have a register containing a field like this:

<field>
  <dim>8</dim>
  <dimIncrement>0x1</dimIncrement>
  <dimIndex>0-7</dimIndex>
  <name>CH%s_TX_THR_EVENT_INT_CLR</name>
  <description>Set this bit to clear the rmt_ch%s_tx_thr_event_int_raw interrupt.</description>
  <bitOffset>24</bitOffset>
  <bitWidth>1</bitWidth>
  <access>write-only</access>
</field>

When generating code with 0.23.0, this code produces the expected output:

let mut data = 0u32;
let data_ptr = &mut data as *mut _ as *mut u32;
unsafe { &*(data_ptr as *mut generated23::generic::Reg<generated23::rmt::int_clr::INT_CLR_SPEC>) }
    .write(|w| unsafe { w.ch_tx_thr_event_int_clr(7).set_bit() });
println!("With svd2rust 0.23.0: {:032b}", data);

The output is:

With svd2rust 0.23.0: 10000000000000000000000000000000

However, generating code from the same SVD with 0.24.0, the following code

let mut data = 0u32;
let data_ptr = &mut data as *mut _ as *mut u32;
unsafe { &*(data_ptr as *mut generated24::generic::Reg<generated24::rmt::int_clr::INT_CLR_SPEC>) }
    .write(|w| unsafe { w.ch_tx_thr_event_int_clr::<7>().set_bit() });
println!("With svd2rust 0.24.0: {:032b}", data);

outputs:

With svd2rust 0.24.0: 00000000000000000000000010000000

Apparently, it's ignoring the bitOffset of the field. I created a repo to reproduce this: https://github.com/bjoernQ/svd2rust_test Maybe I'm doing something obviously wrong, but it looks like a bug to me. Yes. Those functions are not equivalent, and I'm really surprised someone already used one of them. The const-generic version just takes the bitOffset, not the field number. In 0.23 both functions were present in the generated code: the numeric version always, the const-generic one under a feature. I've deleted the first one during refactoring. If you need it I could try to bring it back, but it will take time. Thanks for the quick reply. I will look into our code and we can most probably work around it.
The behavior of the const-generic version is at least surprising; we started using it when we upgraded to 0.24.0 and found the numeric version removed, and we just assumed it was equivalent. No idea if this will cause similar confusion for others, and whether it would be better to re-add the old functionality.
2022-06-15T09:31:26
2025-04-01T06:45:40.936103
{ "authors": [ "bjoernQ", "burrbull" ], "repo": "rust-embedded/svd2rust", "url": "https://github.com/rust-embedded/svd2rust/issues/616", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
501466274
[WIP] Newsletter #2: Extended draft and basic structure This PR is a total WIP atm, posting it just to show that the work is getting done (though slowly). Sorry, I started working on the draft late this time. I was planning to prepare an extended draft in the twenties of September but unexpectedly got totally swamped. Ideally, this newsletter should be released on 1 October, but I hope it's ok if it is released a few days later this time. Looking good so far 😄 Take your time - I think maybe once this issue is out we should raise an issue to discuss how we can get more people submitting their own stuff/news that interests them to the newsletter. Feel bad that all of the work is falling on you at the minute! I'd love to do the section on rx, but probably best to wait for this to be merged first. @cloudhead I'm not sure how long finishing and merging this will take, so I will be happy if you make a PR to this branch. :) I would love to help with the Iced section too! I can open a PR with a short description. That's a hella lot of work. Congrats! 🎉 @ozkriff https://github.com/ozkriff/rust-gamedev.github.io/pull/2 :+1: The PR is finally ready for review! r? everyone If everything is ok, this newsletter will be merged and published in a few hours (~ 16:00 UTC?). Found a typo at the end: "The source code is aviable on GitHub"
:thinking: r? I guess I should wait for a few more review approvals before merging, after all these section moves. Thanks for the ton of work you did on this one! New structure looks good to me :)
gharchive/pull-request
2019-10-02T12:45:17
2025-04-01T06:45:40.944106
{ "authors": [ "17cupsofcoffee", "AlexEne", "Lokathor", "cloudhead", "hecrj", "nico-abram", "ozkriff", "phaazon" ], "repo": "rust-gamedev/rust-gamedev.github.io", "url": "https://github.com/rust-gamedev/rust-gamedev.github.io/pull/25", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
304622791
Clarify non-nullable pointer optimization in repr(C) section Resolves #59 Looks good, just need to double-check that Box is Actually FFI-Safe, as this is quite surprising to hear Thanks for the review! The Nullable Pointer Optimization subsection under FFI calls out boxes since they can't hold null pointers. This makes it seem like they should be FFI-safe since the representation in memory would be the same as for a raw pointer (iiuc). I'm far from an authority on this, so please correct me if I'm wrong! One potential distinction I see between Box<T> and the other listed types is that since a Box holds Rust-owned data (which is freed using Rust's allocator), it's not suitable to receive pointers from non-Rust code. However, it should be able to provide pointers from Rust to non-Rust code, right? There are subtle annoying details about ABI here. Specifically, pointers are a specific kind of thing in ABIs, and a struct containing a pointer does not have the same one. I expect we currently do match the ABI, but I'm uncertain that we guarantee it. (why would we?) Discussing with @eddyb, we agree we could guarantee the ABI of Box but don't have a strong motivation to do so. All we guarantee about Box is that Option<Box> has the same layout as Box. Makes sense. I can't see any motivation for guaranteeing that ABI either. Updated the PR.
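The one guarantee the thread settles on, that Option<Box<T>> has the same layout as Box<T>, is observable directly with size_of, since the null-pointer optimization lets None reuse the forbidden null value instead of a separate discriminant. A small sketch:

```rust
use std::mem::size_of;

fn main() {
    // Because a Box can never be null, Option<Box<T>> needs no extra
    // discriminant and has the same size as Box<T> itself.
    println!("{}", size_of::<Box<u8>>());
    println!("{}", size_of::<Option<Box<u8>>>());
    assert_eq!(size_of::<Box<u8>>(), size_of::<Option<Box<u8>>>());

    // The same optimization applies to other non-nullable pointer
    // types, such as references.
    assert_eq!(size_of::<&u8>(), size_of::<Option<&u8>>());
    println!("ok");
}
```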
gharchive/pull-request
2018-03-13T04:17:42
2025-04-01T06:45:40.965730
{ "authors": [ "Gankro", "ramosbugs" ], "repo": "rust-lang-nursery/nomicon", "url": "https://github.com/rust-lang-nursery/nomicon/pull/60", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
356040152
Latest docs.rs documentation isn't working The link https://docs.rs/packed_simd/ leads to a 404 error. docs.rs fails to build the crate because it uses repr(transparent), which was stabilised in 1.29, but their build system is still using Rust 1.28. Similar docs.rs issue: https://github.com/onur/docs.rs/issues/219 It's unfortunate, but the main docs.rs developer hasn't been active in a while, so it's unknown how long it will take for them to update their compiler. Yeah, sadly there is not much that we can do about this until the docs.rs issue is resolved. In the meantime, the documentation generated from the master branch is available here and updated on every commit: https://rust-lang-nursery.github.io/packed_simd/packed_simd/ docs.rs looks to be working as expected now. I believe this should be closed, and the comment on the readme removed?
gharchive/issue
2018-08-31T16:06:37
2025-04-01T06:45:40.971121
{ "authors": [ "GabrielMajeri", "gnzlbg", "tafia" ], "repo": "rust-lang-nursery/packed_simd", "url": "https://github.com/rust-lang-nursery/packed_simd/issues/110", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
261590796
std::os::raw::c_void used as function return type Hello, thanks for the library. It is said in the documentation: "Do not use it (i.e. std::os::raw::c_void) as a return type for FFI functions which have the void return type in C." However, the generated binding does use std::os::raw::c_void as a return type.

Input C/C++ Header

#ifndef POINT_H
#define POINT_H

struct Point {
    float x;
    float y;
};

void *point_info(const struct Point *p, char *str);

#endif

Bindgen Invocation

bindgen::Builder::default()
    .header("point.h")
    .generate()
    .unwrap()

Actual Results

extern "C" {
    pub fn point_info(p: *const Point, str: *const ::std::os::raw::c_char) -> *mut ::std::os::raw::c_void;
}

Expected Results I expect no return type for the Rust function, or ().

extern "C" {
    pub fn point_info(p: *const Point, str: *const ::std::os::raw::c_char);
}

I use bindgen 0.30.0. Is it really an issue or am I missing something? Thank you. Oh, sorry, I edited the code a few times and didn't notice that I am returning void *, not plain void. With void point_info(const struct Point *p, char *str); everything is fine.
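The distinction the reporter ultimately noticed can be written out on the Rust side. In this sketch the C functions are stood in by Rust definitions (point_log is a hypothetical plain-void function added for contrast) so it runs without linking any C code; in a real binding the same signatures would appear inside an extern "C" {} block as shown in the issue:

```rust
use std::os::raw::{c_char, c_void};
use std::ptr;

#[repr(C)]
pub struct Point {
    pub x: f32,
    pub y: f32,
}

/// C `void point_log(const struct Point *)`: plain void, so the Rust
/// signature has no return type at all.
pub extern "C" fn point_log(_p: *const Point) {}

/// C `void *point_info(const struct Point *, char *)`: returns a raw
/// pointer, so `*mut c_void` is the correct return type. The warning
/// in the bindgen docs is only about plain `void`, not `void *`.
pub extern "C" fn point_info(_p: *const Point, _s: *const c_char) -> *mut c_void {
    ptr::null_mut()
}

fn main() {
    let p = Point { x: 1.0, y: 2.0 };
    point_log(&p);
    assert!(point_info(&p, ptr::null()).is_null());
    println!("ok");
}
```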
gharchive/issue
2017-09-29T09:55:28
2025-04-01T06:45:40.991201
{ "authors": [ "ivanovaleksey" ], "repo": "rust-lang-nursery/rust-bindgen", "url": "https://github.com/rust-lang-nursery/rust-bindgen/issues/1047", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
296744716
created compiletest.md describe the steps required to add a test and a header command to compiletest All feedback addressed (except link to PR#47). LGTM 👍 I think we should wait for #47 unless you are under time constraints because that will make it easier to link to the other chapter. Thanks! Agreed. All cool by me! And thanks for the proof-read! @U007D #47 has landed 🎉 A few comments Perhaps we can move this chapter to be a subchapter in the SUMMARY? Perhaps we can add links to this chapter in https://github.com/rust-lang-nursery/rustc-guide/blob/master/src/tests/adding.md and https://github.com/rust-lang-nursery/rustc-guide/blob/master/src/tests/intro.md Could you update this chapter to link to those chapters? Thanks! Great, glad to hear it! Re: edits: Sure. I'll see what I can do to make the edits over the next few days, and will let you know once they're done. Also improved some language under Adding a new header command. Thanks!
gharchive/pull-request
2018-02-13T13:53:51
2025-04-01T06:45:41.008897
{ "authors": [ "U007D", "mark-i-m" ], "repo": "rust-lang-nursery/rustc-guide", "url": "https://github.com/rust-lang-nursery/rustc-guide/pull/53", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
57429965
Cargo build with dependencies fails on Cygwin referring to git-core/templates After trying cargo build --verbose on a project in Cygwin with a single dependency, this is the output:

± cargo build --verbose
    Updating registry `https://github.com/rust-lang/crates.io-index`
Unable to update registry https://github.com/rust-lang/crates.io-index

Caused by:
  [2] Could not find '/cygdrive/c/cygwin64/usr/share/git-core/templates/' to stat: The system cannot find the path specified.

/cygdrive/c/cygwin64/usr/share/git-core/templates/ does exist, since I went there and saw stuff in it. Perhaps this has something to do with Cygwin pathing issues? The Cargo is a Windows 64-bit build, which I suspect means that Cargo is taking the Cygwin path of /cygdrive/c/cygwin64/usr/share/git-core/templates/ and trying to access it, which would not be possible through a Windows executable. My git is not a Windows-based git; it is supplied by Cygwin. which git returns /usr/bin/git. It can be fixed by utilising something like cygpath in order to convert Unix-style paths into Windows-style paths:

± cygpath -w `git config --get init.templatedir`
C:\cygwin64\usr\share\git-core\templates

But of course you don't want to use the cygpath executable. Perhaps in the Windows version of Cargo you can detect whether the path looks like a Cygwin-style path, and convert it directly. All Cygwin paths always start with /cygdrive/. Cargo version: cargo 0.0.1-pre-nightly (9404539 2015-02-09 20:54:26 +0000) It seems to fail even when I try to explicitly change the init.templatedir in the local git config. I was using git config --local init.templatedir C:\cygwin64\usr\share\git-core\templates which changed the local config to point to the Windows-style path, but running cargo build still resulted in the same error.
Also tried setting the $GIT_TEMPLATE_DIR environment variable to a Windows-compatible path. I have set it in both the Windows-native environment variables and the Cygwin environment variables, but the libgit2 in Cargo doesn't seem to look up $GIT_TEMPLATE_DIR. See: http://git-scm.com/docs/git-init#_template_directory The environment variable should have higher precedence than any configuration. Fixed this by doing a global git config: git config --global init.templatedir C:\cygwin64\usr\share\git-core\templates So therefore my conclusion is: why is Cargo's libgit2 not respecting the --local config, nor respecting $GIT_TEMPLATE_DIR? I think that dealing with cygwin-related paths and such is not necessarily under Cargo's or libgit2's purview. I think that you'll get much more mileage just using Windows paths everywhere unless you know for a fact that the tool is controlled by Cygwin. For dealing with GIT_TEMPLATE_DIR you may also want to open up an issue against libgit2, as that's probably the best place to handle it (they handle other env vars in other places at least). The --local issue may be our fault; can you elaborate more on that? (e.g. what the expected behavior is and what was actually found) I put an issue on libgit2: https://github.com/libgit2/libgit2/issues/2910 With regards to the --local issue: basically, if I run git config --global init.templatedir C:/cygwin64/usr/share/git-core/templates, cargo build works. If I run instead git config --local init.templatedir C:/cygwin64/usr/share/git-core/templates, cargo build doesn't work. This means cargo build's libgit2 is not taking into account the locally configured path, only the global path. There's also a thing called the --system path, but I have not tried that.
According to the git documentation, the git template directory should be queried in this order: GIT_TEMPLATE_DIR, local config, global config, system config. So the behavior you're looking for is that the local init.templatedir will affect the global checkouts of git dependencies? I see what you mean; I was unaware of git clone being considered a global command. It's just that it kind of makes sense if you start a repo, change the init.templatedir for that current repo, and then run git clone or whatever libgit2 is doing in order to bring in dependencies for that particular project's repo. Ah, what I mean is that the checkout of git repositories is a global operation because all Cargo projects share the same cache for git dependencies. I would expect a clone of a submodule, for example, to use the local init.templatedir but not the global "check out this dependency" operation. Ok, so we are in agreement that the local init.templatedir should be used, right? Anyway, the guy from libgit2 answered: https://github.com/libgit2/libgit2/issues/2910#issuecomment-76943275 He said libgit2 doesn't check environment variables, but it can be passed configuration paths. Perhaps this means cargo should be checking them and passing the correct configuration path? (Specifically the local init.templatedir). I think I may not quite yet grasp what init.templatedir should be used for here, so maybe it would help to draft up a patch? If you're missing any options in git2-rs please let me know!
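The detection heuristic suggested in the thread (Cygwin drive paths always start with /cygdrive/) is straightforward to sketch. This is an illustrative helper, not actual Cargo code:

```rust
/// Convert a Cygwin-style path like `/cygdrive/c/foo/bar` into a
/// Windows-style path like `C:\foo\bar`, as suggested in the thread.
/// Paths that don't start with `/cygdrive/` are returned unchanged.
fn cygwin_to_windows(path: &str) -> String {
    const PREFIX: &str = "/cygdrive/";
    match path.strip_prefix(PREFIX) {
        Some(rest) => {
            // The first component is the drive letter; the rest is the
            // path below it, with forward slashes swapped for backslashes.
            let mut parts = rest.splitn(2, '/');
            let drive = parts.next().unwrap_or("");
            let tail = parts.next().unwrap_or("");
            format!("{}:\\{}", drive.to_uppercase(), tail.replace('/', "\\"))
        }
        None => path.to_string(),
    }
}

fn main() {
    let converted =
        cygwin_to_windows("/cygdrive/c/cygwin64/usr/share/git-core/templates");
    println!("{}", converted);
}
```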
gharchive/issue
2015-02-12T08:17:33
2025-04-01T06:45:41.028969
{ "authors": [ "CMCDragonkai", "alexcrichton" ], "repo": "rust-lang/cargo", "url": "https://github.com/rust-lang/cargo/issues/1295", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
545332924
Unexpected behavior: cargo rustc -- --emit=llvm-ir Problem Description of bug Running cargo rustc -- --emit=llvm-ir does not produce output IR. Instead, a warning message "warning: ignoring emit path because multiple .ll files were produced" is printed out. I expected that running cargo rustc -- --emit=llvm-ir would produce an llvm-ir output file. Environments that I tested to reproduce problem Windows 10 OS, rustc 1.42.0-nightly (0de96d37f 2019-12-19), cargo 1.41.0-nightly (626f0f40e 2019-12-03) Ubuntu WSL, rustc 1.42.0-nightly (0de96d37f 2019-12-19), cargo 1.41.0-nightly (626f0f40e 2019-12-03) Steps Set up a minimal Rust project as the image above. Run cargo rustc -- --emit=llvm-ir Possible Solution(s) This is not a solution, but running rustc src/main.rs --emit=llvm-ir worked as I expected in both the environments that I tested; llvm-ir output was generated as a file. Notes Output of cargo version: cargo 1.41.0-nightly (626f0f4 2019-12-03) The .ll files should be in the target\debug\deps directory. (That's where all output goes.) You may want to turn off incremental compilation (in the profile, or env var), since that changes how rustc splits the code generation (causing multiple files). It is a known issue that incremental causes this: https://github.com/rust-lang/rust/issues/48147 The warning is a false-positive (https://github.com/rust-lang/rust/issues/49801), rustc just seems to be confused when there are multiple .ll files. Thank you for the clarification 👍
gharchive/issue
2020-01-04T21:10:51
2025-04-01T06:45:41.037388
{ "authors": [ "JOE1994", "ehuss" ], "repo": "rust-lang/cargo", "url": "https://github.com/rust-lang/cargo/issues/7765", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1951019469
ci: big ⚠️ to ensure the CNAME file is always there What does this PR try to resolve? A request from https://rust-lang.zulipchat.com/#narrow/stream/242791-t-infra/topic/doc.2Ecrates.2Eio.20CNAME.20missing The CNAME file is for GitHub to redirect requests to the custom domain. Missing it may entail a security hazard and domain takeover. See https://docs.github.com/en/pages/configuring-a-custom-domain-for-your-github-pages-site/managing-a-custom-domain-for-your-github-pages-site#securing-your-custom-domain Thanks! 😄 @bors r+ :pushpin: Commit ec9c5b0b43c6066dd46a3fbfc6e12cc1aee23362 has been approved by ehuss It is now in the queue for this repository. :hourglass: Testing commit ec9c5b0b43c6066dd46a3fbfc6e12cc1aee23362 with merge 5225467af0cb07cd5aee5d7d5f7fcc64cb0b3c32... :sunny: Test successful - checks-actions Approved by: ehuss Pushing 5225467af0cb07cd5aee5d7d5f7fcc64cb0b3c32 to master...
gharchive/pull-request
2023-10-19T01:31:59
2025-04-01T06:45:41.041957
{ "authors": [ "bors", "ehuss", "weihanglo" ], "repo": "rust-lang/cargo", "url": "https://github.com/rust-lang/cargo/pull/12853", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
195667493
cargo doc: show where are docs Let's print the file we are trying to open. It's useful when we fail to detect the browser, because the user would then be able to copy-paste the URL manually. r? @brson (rust_highfive has picked a reviewer for you, use r? to override) @bors: r+ :pushpin: Commit 6873389 has been approved by alexcrichton :hourglass: Testing commit 6873389 with merge 2adc601... :sunny: Test successful - status-appveyor, status-travis Approved by: alexcrichton Pushing 2adc60102d7827824055c689b64087e8d7181f97 to master...
gharchive/pull-request
2016-12-14T22:44:58
2025-04-01T06:45:41.045260
{ "authors": [ "alexcrichton", "bors", "matklad", "rust-highfive" ], "repo": "rust-lang/cargo", "url": "https://github.com/rust-lang/cargo/pull/3403", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1768754124
Unable to build crate in docs.rs due to a missing library, available through optional features Crate name cephalon Build failure link https://docs.rs/crate/cephalon/0.0.2/builds/843039 Additional details Hi, after searching around for a few hours, I pinpointed the issue to the libtorch library. The library is needed by a dependency, tch-rs, to be able to run a sentence embedding model in Rust. The thing is, I am using an optional feature download-libtorch for tch that should download the library if it is missing. However, on docs.rs, and when installing via the crate on crates.io, it does not download the missing library. However, if I just use my local copy it seems to be working completely fine, and downloading the missing library. The library itself is around 4GB, so maybe that's why, but it doesn't make sense why it would not be installed when I try to install the crate onto my system, unless there is a limit on downloading libraries with a huge file size. I am using Windows 11 and Rust version 1.69. My friend tried it on his Mac, and it is the same issue. Any help would be appreciated! I'm not sure why download-libtorch doesn't appear to be attempting to download the library, but if it did, that would also fail because docs.rs does not provide network access. tch/libtorch-sys also have a doc-only feature that avoids depending on the native library at all; you will have to activate it only when building on docs.rs, something like:

[features]
doc-only = ["tch/doc-only"]

[package.metadata.docs.rs]
features = ["doc-only"]

Ah, I see why it doesn't log the download: for annoying reasons the build log we record is actually the second build attempt, and the torch-sys build script has some state that means if it fails to download once, it will just skip attempting to download a second time. Might be worth you opening an upstream issue to let them know:

if !libtorch_dir.exists() {
    fs::create_dir(&libtorch_dir).unwrap_or_default();
    ...
    download(&libtorch_url, &filename)?;
    ...
}
gharchive/issue
2023-06-22T01:14:22
2025-04-01T06:45:41.097439
{ "authors": [ "Nemo157", "sagarp-patel" ], "repo": "rust-lang/docs.rs", "url": "https://github.com/rust-lang/docs.rs/issues/2160", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2034650366
Add #[context()] for the function which has the return type impl Iterator<Item = xxx>, get error[E0562] When using

# rustc --version
rustc 1.74.0 (79e9716c9 2023-11-13) (Fedora 1.74.0-1.fc39)

What I tried to do If I add #[context()] to a function whose return type is impl Iterator<Item = xxx>, I get error[E0562]: impl Trait only allowed in function and inherent method return types, not in closure return types.

use fn_error_context::context;

#[context("Scan all tmpfiles conf and save entries")]
fn empty_iterators() -> impl Iterator<Item = String> {
    std::iter::empty::<_>()
}

fn main() {
    let _ = empty_iterators();
}

What happened When running the above code, I get this error:

error[E0562]: `impl Trait` only allowed in function and inherent method return types, not in closure return types
 --> src/main.rs:4:25
  |
4 | fn empty_iterators() -> impl Iterator<Item = String> {
  |                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For more information about this error, try `rustc --explain E0562`.

What I expected When the #[context()] line is removed, there is no error; it should also be supported with the attribute present. I think the tricky thing here is that due to the use of the fn-error-context crate it's not immediately obvious (to someone starting work on a codebase that uses the crate without being familiar with its implementation) that the macro rewrote the function to use an inner closure. Hmm, I wonder if it'd be possible to change that crate to use an inner fn instead of a closure...thinking about it briefly it seems likely. I'm not a maintainer for this repository but I suspect there's not much immediately actionable for the Rust trait maintainers. Ah, there's already an issue: https://github.com/andrewhickman/fn-error-context/issues/7 so I think we should probably discuss there.
Thanks @cgwalters for the clarification; I will close this issue and track it in https://github.com/andrewhickman/fn-error-context/issues/7
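The mechanism behind the error, and the inner-fn workaround floated in the thread, can be sketched by hand without the fn-error-context macro itself:

```rust
// Hand-written sketch of the rewrite discussed above. Wrapping the
// body in a closure breaks `impl Trait` return types:
//
//     fn broken() -> impl Iterator<Item = String> {
//         (move || -> impl Iterator<Item = String> { // error[E0562]
//             std::iter::empty::<String>()
//         })()
//     }
//
// An inner `fn` keeps a function-level return position, where
// `impl Trait` is allowed:

fn empty_iterator() -> impl Iterator<Item = String> {
    fn inner() -> impl Iterator<Item = String> {
        std::iter::empty::<String>()
    }
    // A context-attaching macro could wrap error handling around this
    // call while leaving the return type legal.
    inner()
}

fn main() {
    assert_eq!(empty_iterator().count(), 0);
    println!("ok");
}
```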
gharchive/issue
2023-12-11T01:20:44
2025-04-01T06:45:41.103448
{ "authors": [ "HuijingHei", "cgwalters" ], "repo": "rust-lang/impl-trait-initiative", "url": "https://github.com/rust-lang/impl-trait-initiative/issues/18", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
509262188
nfnetlink constants Hello, I'd like to ask: would it be OK to add netfilter netlink constants? They are in the Linux headers (/usr/include/linux/netfilter/nfnetlink*.h). I'm trying to extend neli with nfnetlink support (currently NFLOG, https://github.com/jbaublitz/neli/pull/48). I'll send a pull request; I just wanted to ask first if including these would be OK. Sure. I assume this is done by #1628 and am going to close, but feel free to re-open if I'm missing something. I think there are a few more constants in the header, but I guess a lazy approach (once they are actually needed) is good, and there's not much use for keeping this issue open :-). Sorry for letting it rot in here.
gharchive/issue
2019-10-18T19:10:37
2025-04-01T06:45:41.106229
{ "authors": [ "JohnTitor", "gnzlbg", "vorner" ], "repo": "rust-lang/libc", "url": "https://github.com/rust-lang/libc/issues/1562", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
818231883
musl 1.2.x support Switch the CI to use musl 1.2.2 and fix any CI problems. r? @Amanieu (rust-highfive has picked a reviewer for you, use r? to override) Note that rust-lang/rust's CI also uses 1.1.24 currently (https://github.com/rust-lang/rust/blob/7e9a36fa8a4ec06daec581e23f390389e05f25e4/src/ci/docker/scripts/musl.sh#L28) and I think we should keep in sync with it. @JohnTitor it is a chicken-and-egg issue: we have to update both, and I intend to update both. Yeah, I just left a comment just in case. Switching to musl 1.2.x only works despite https://github.com/rust-lang/libc/issues/1848 because musl doesn't support symbol versioning, so the linker allows linking against the time32 compat symbols without issue. I have no idea how to properly solve this. The switch to 1.2.x will also make rustup-provided binaries incompatible with Void Linux (since we haven't updated yet), I think? Or well, potentially incompatible, if new functions start being used. The ancient glibc used for builds is also unsupported (I think?), but it's used because it allows all sorts of systems to use the "universal" binaries without issue. Because of that, I'm not sure it's already a good time to update the libc used for builds (which, if all goes well in https://github.com/rust-lang/rust/pull/82556, should be different from the runtime libc). Hmm, I believe this also needs to change the function names... The redirects to time64 functions in musl are implemented via headers, but linking to clock_gettime will get you the time32 impl, while the time64 impl is __clock_gettime64. If Rust code is going to pass the modern time-related data structures with 64-bit fields, it needs to honor the symbol name redirects from the libc headers (ideally derived somehow rather than hard-coded, but they're not going to change so you could just ship the table). But if it's still passing 32-bit versions, it's fine, and will continue to be fine, to reference the legacy symbol names.
As such I don't think anything is broken with regard to targeting musl 1.2.x right now. But there may be problems mixing Rust code with third-party C libraries that have already been built using the new types. Could we keep time_t as 32-bit and expose a separate set of functions which use time64_t? I don't see how this would help the situation any. Pushing the choice of using time64 symbols on programmers will make them almost entirely useless, and people will have to find some way of deciding which symbols to use, at which point it'd just be better if libc could give them the appropriate version directly. This is somewhat related to #2061... Could the libc crate try to build binaries (for the target, since host information is independent) in build.rs which link to symbols introduced in new versions of a certain library, in order to determine its version? So in this case, something like __clock_gettime64. And then create a file which the rest of the headers use to determine what musl version should be targeted? Ideally libc would be a wrapper around bindgen, IMHO, but that would require shipping headers for platforms whose information is instead contained in libc in hardcoded form. Given that this seems to be a necessarily backwards-incompatible change, would it potentially make sense to coordinate this with a potential switch of the musl target from static to dynamic? It isn't necessarily backwards incompatible; I believe the BSDs have already requested some mechanism to support ABI incompatible versions of their own platforms. If such a thing was implemented, it'd be possible to carry the two versions "just fine". Given that the rust ecosystem seems to dislike feature tests (which would be the usual way of implementing such a mechanism), it will probably end up being backwards-incompatible.
Anyway, I think both of these incompatible changes are very different in what they touch (default linkage vs minimum supported version), so I don't see much of a reason to coordinate them. Could be convinced differently. Then, rather than risking runtime incompatibility and difficult-to-debug brokenness (e.g. a -sys crate that's building Rust bindings to a C library that uses time_t in its interface, such as openssl-sys), I think it'd make sense to just start requiring musl 1.2 at compile-time and runtime. The target tier policy has a provision for how to upgrade the minimum requirements for a target; I think we should use that to bump the baseline requirement for the -musl targets to 1.2. Requiring musl 1.2 would make sense for (old - the eventual official riscv32 target should be legacy free in this sense) 32bit targets, but not for 64bit ones. Requiring musl 1.2 would make sense for (old - the eventual official riscv32 target should be legacy free in this sense) 32bit targets, but not for 64bit ones. That's a good point; the ABI hasn't changed for 64-bit targets, so the baseline wouldn't need to change. That said, how would you detect the version? Or would it be simply a contract? We wouldn't need to detect the version. We would bind time-related symbols to the new 64-bit versions that don't exist in older versions of musl, and then we'd fail to link with older versions of musl. Exactly. That seems very reasonable to me, and it automatically does the right thing for 64-bit targets without encoding more than the minimal needed knowledge (the symbol redirections). BTW is Rust already encoding that sort of redirection for glibc targets, where it's needed to use most of the standard unix and stdio file functions (since the legacy symbols on 32-bit archs have 32-bit off_t)? I've read through this thread and I'm trying to get a sense for what needs to be done to move forward the upgrade to musl 1.2.2. 
I believe this is the current state of this thread:
- We will upgrade to musl 1.2.2 and require that version at runtime. Per RFC 2803 this requires approval from the compiler and release teams.
- This is a breaking change for the i686-unknown-linux-musl target but not the x86_64-unknown-linux-musl target.
- There was a suggestion to bundle this breaking change with a change for the musl targets from static to dynamic, but consensus seems to be that we don't need to do these at the same time.
- There was a suggestion to expose a time64_t type, but consensus seems to be that we should not go down that route.
- There is the question of handling the redirects musl provides for 32-bit compatibility. Is this still a concern if we've decided to make this a breaking change to the target?

I'm sure I've missed something so please feel free to correct me. I'd like to help push this along if possible 🙂

There is the question of handling the redirects musl provides for 32-bit compatibility. Is this still a concern if we've decided to make this a breaking change to the target?

In general, legacy binaries will continue to run as expected. From a source-compilation POV, musl only provides the 64-bit versions by default. So, you should just be able to upgrade and make time_t 64-bit.

If we currently defaulted to dynamically linking musl, I would want to have more of a transition plan for requiring a newer version. But since currently people would have to go out of their way to dynamically link musl, it seems unlikely that bumping our requirement forward will cause much grief.

In any case, this PR is incomplete as it is: if we're changing the type definitions for the libc types then we must also change the functions to link to the _time64 symbols instead of the normal ones. In the musl C headers this is done using the __REDIR macro; the Rust equivalent is the link_name attribute.

Thanks for the responses @kaniini, @joshtriplett and @Amanieu!
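The __REDIR-to-link_name mapping described above can be sketched roughly like this. Only the __clock_gettime64 symbol name comes from the thread; the cfg name and the struct layout are illustrative assumptions, not the libc crate's real API:

```rust
use std::mem;

// 64-bit timespec as musl 1.2 defines it on 64-bit targets (on 32-bit
// targets tv_nsec is a 32-bit `long` plus padding; see later comments).
#[allow(non_camel_case_types)]
#[repr(C)]
pub struct timespec64 {
    pub tv_sec: i64,
    pub tv_nsec: i64,
}

#[allow(dead_code)]
extern "C" {
    // On a time64 32-bit musl target, the binding would carry a redirect
    // mirroring musl's C-side __REDIR(clock_gettime, __clock_gettime64):
    //
    //     #[cfg_attr(musl_time64, link_name = "__clock_gettime64")]
    //
    // (`musl_time64` is a hypothetical cfg; the libc crate would choose
    // its own name. The function is declared but never called here.)
    pub fn clock_gettime(clk: i32, ts: *mut timespec64) -> i32;
}

fn main() {
    assert_eq!(mem::size_of::<timespec64>(), 16);
    println!("timespec64 occupies {} bytes", mem::size_of::<timespec64>());
}
```

The point is that downstream source code keeps calling `clock_gettime` by its ordinary name; only the linked symbol changes per target.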
It seems like the next step then is to change our functions to link with the new 64-bit symbols where applicable. After that, I guess we'll start an FCP to get approval to make this change to the target?

Sounds good!

Will it cause any problem if I use Rust (stable) with musl 1.2.2 without this code change?

Rust will work fine; the only issue is that it will use 32-bit time_t on 32-bit platforms.

I had a look at adding the link attributes to redirect things to time64 symbols for musl+32bit, and I think that part seems fairly straightforward (although there are a fair number of changes). However, I did run into an issue that is not related to the rename. When building the generated main.c file, I see a failure: struct input_event has no member named time. We define that as a timeval field, which is evidently not correct for musl with time64 on 32bit. Not sure what the correct definition here is as we make this change? It seems like an interesting 32 vs 64bit difference and so I am curious how libc should manage the transition?

The time member of struct input_event is deprecated because it was wrongly declared with type struct timeval but can't match the type, since it always has __kernel_ulong_t for seconds. Instead applications should use input_event_sec and input_event_usec. It's still provided for backwards compatibility with old sources on 64-bit systems and legacy time32 systems, but programs using the old name won't work on time64, so they need to be fixed.

Thanks @richfelker for the details. @Amanieu, any suggestions from the libc maintainer perspective on the right approach here?
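A hedged sketch of what a time64-safe input_event binding might look like, following the advice above. The input_event_sec/input_event_usec names come from the kernel headers as quoted in the thread; the concrete field widths used here (u32 seconds/microseconds, as on a 32-bit target) are an assumption for illustration, not a verified kernel layout:

```rust
use std::mem;

// Illustrative only: models the time64 form of the struct, in which the
// deprecated `time: timeval` member is gone and the timestamp halves are
// plain kernel integers.
#[allow(non_camel_case_types)]
#[repr(C)]
pub struct input_event {
    pub input_event_sec: u32,  // __kernel_ulong_t on a 32-bit target (assumed)
    pub input_event_usec: u32,
    pub type_: u16,            // `type` is a Rust keyword, hence the underscore
    pub code: u16,
    pub value: i32,
}

fn main() {
    let ev = input_event {
        input_event_sec: 1,
        input_event_usec: 500_000,
        type_: 0,
        code: 0,
        value: 0,
    };
    // 4 + 4 + 2 + 2 + 4 = 16 bytes, with no implicit padding.
    assert_eq!(mem::size_of::<input_event>(), 16);
    assert_eq!(ev.input_event_sec, 1);
}
```

Code that previously read `ev.time.tv_sec` would migrate to `ev.input_event_sec`, which is exactly the source-level breakage richfelker describes for time64 targets.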
In terms of managing the breaking change I don't see any great options; the best I have are one of:
1. introducing methods on input_event to access/set the sec and usec parts,
2. dropping time on time64 (since we can't use timeval) and (optionally) marking time as deprecated elsewhere, or
3. removing time everywhere and adding input_event_sec and input_event_usec fields instead, dealing with the fallout.

I was also looking for inspiration from previous libc changes, but it looks like when we added 64-bit file offsets we did so by adding new functions (not redirecting to 64-bit symbols as proposed here) and I don't see that we have added time64 support for other linux targets.

I think the best way forward here for the libc crate is to try to stay compatible with the glibc bindings. In practice, most Rust code is developed on the x86_64-unknown-linux-gnu target so we want the musl target to be compatible with that. Currently the glibc bindings in this crate use a 32-bit time_t. The first step would be to add 64-bit time functions to the glibc bindings (e.g. __localtime64). See this for more information on 64-bit time in glibc. Once that is implemented, we can expose 64-bit time in musl using the same types and symbols as the glibc bindings for source compatibility. Finally, we can switch the Rust standard library to use the libc functions with 64-bit time (perhaps with some run-time detection for backwards-compatibility). In practice most Rust code doesn't use the libc bindings to get the current time; it will use Instant and SystemTime from the standard library.

Thanks @Amanieu. I think this means that we should instead be moving musl forward to 1.2.x without enabling time64 for 32-bit targets right now. I thought the earlier consensus was to try and couple multiple potentially disruptive changes, but I think the update is worthwhile even if it is a longer-term project to get time64.
My other investigations also seem to confirm this is the right approach; updating the targets that use the in-box musl will require the musl 1.2.x update in rust to link so doing it first is helpful, and it appears we have multiple sources of musl in libc testing as well, including at least one target (a mips target for openwrt) that is currently musl 1.1.x until a future stable (22.xx) release. So we can’t (yet) require a time64 enabled musl here. I would really encourage rethinking that and moving to the 64-bit time_t now while it's easy rather than getting stuck with something that's already at EOL (many users need to represent times at now+16 years). The time32 support in musl >1.2 is "ABI-compat only"; it's not intended for new programs, and continuing to use it will make it hard to interoperate with C and C++ libraries which will necessarily be built with 64-bit time_t. This is unlike glibc, where there are 2 build options and 32-bit is still the default. We've even been asked about options to build a libc without the legacy time32 symbols, which would be helpful to users who want to make sure their binaries are all Y2038-ready and not using legacy EOL'd interfaces; presently the main thing stopping this is autoconf (where many broken link-only tests check the symbol without including the headers, and would thereby get wrong results). Of course this would not be suitable for a systemwide shared libc that's intended to be able to run existing binaries, but I bring it up to highlight that time32 is not intended as supported API, just ABI. Since musl 1.2 is ABI-compatible with musl 1.1 we should be able to just upgrade it without any problems. I feel that the correct place to make the transition to 64-bit time is in the Rust crates using libc (in particular std) rather than the libc crate itself. Essentially we would just have those crates switch to using time64_t-based APIs instead. 
This is similar to how we currently handle off_t vs off64_t: https://github.com/rust-lang/rust/blob/master/library/std/src/sys/unix/fs.rs I can see 2 ways of making the transition to 64-bit time_t. The first option is to make a breaking change in the libc crate and simply change time_t directly. This is likely to break existing crates in practice since Rust is much less forgiving than C with regards to implicit conversions between integer types. The second option is what I described above. We would keep time_t as a 32-bit type in the libc crate and add a separate time64_t type with associated functions. The original functions would be wired to musl's 32-bit time_t ABI and the time64_t functions would be wired to musl's 64-bit time_t ABI. #[deprecated] attributes on the 32-bit time_t function can be used to encourage people to migrate to the 64-bit time_t API. I personally favor the second option: the deprecation warnings will appear every time any code uses the 32-bit time APIs in libc, even in a transitive dependency, so there is little risk they would be ignored. This is likely to break existing crates in practice since Rust is much less forgiving than C with regards to implicit conversions between integer types. I don't see how you can claim it could break anything at the source level. Any program that builds on 64-bit archs already has time_t as a 64-bit type. The only possible breakage I see is ABI between dynamic-linked code using the old ABI with time_t in public interfaces, and Rust is not even encouraging the use of such dynamic linking at present as far as I can tell. Nothing else should break. musl has always been very intentional about not having foo64_t types, and I would really really like for Rust to not try to impose that on us. A crater run might be tricky since libc is published as a normal crate on crates.io rather than shipped as part of the standard library. It's probably possible but I'm not familiar enough with crater. 
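Purely to illustrate the second option described above (keep a 32-bit time_t and add a parallel time64_t API behind deprecation warnings) — the approach richfelker pushed back on — the shape would be roughly as follows. The aliases mirror the proposal; the function names and bodies are invented placeholders for the sketch, not real libc items:

```rust
use std::mem;

// Sketch for a 32-bit target: the legacy alias stays 32-bit...
#[allow(non_camel_case_types)]
pub type time_t = i32;
// ...and a parallel 64-bit alias gets its own family of functions.
#[allow(non_camel_case_types)]
pub type time64_t = i64;

#[deprecated(note = "32-bit time_t is not Y2038-safe; use the time64 variant")]
pub fn timestamp32() -> time_t {
    0 // placeholder; the real crate would call the time32 ABI symbol
}

pub fn timestamp64() -> time64_t {
    0 // placeholder; the real crate would call a __*64-style symbol
}

#[allow(deprecated)]
fn main() {
    assert_eq!(mem::size_of::<time_t>(), 4);
    assert_eq!(mem::size_of::<time64_t>(), 8);
    let _ = timestamp32(); // downstream code would see a deprecation warning here
    let _ = timestamp64();
}
```

The deprecation attribute is what makes the warnings surface in every dependent crate, which is the mechanism Amanieu is counting on to drive migration.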
but if we change time_t to be 64-bit (pub type time_t = i64;) then it will fail to compile:

That code already fails to compile on any 64-bit target, so nobody is going to have written that. If they did they already got a bug report that it breaks on 64-bit targets.

Oh, right, duh!

Perhaps we could audit some of the top reverse dependencies on crates.io or something to see what kind of impact this might have?

libc has 4470 reverse dependencies... I don't really feel comfortable making this change without some sort of crater run, but otherwise I'm happy to go ahead with a 64-bit time_t if there aren't too many regressions.

Perhaps we can configure crater to run with a [patch.crates-io] in the Cargo.toml of the crates being tested? I'm not too familiar with crater and don't really know who to ping for this.

I took some time to look at this and wanted to see if we could get things moving along again. I have a branch here that includes some additional commits that I would be happy to have pulled in here. Those commits include redirects for methods with time parameters, including functions related to the stat structure which contains times (and for which I also had to perform updates). Unfortunately, some of the (tier 3) platforms with musl support (hexagon and riscv32) do not have that musl support upstreamed and so we will need to decide how to proceed with getting those updated (assuming that a musl 1.2 port has been performed for them). In CI today we also use a toolset from OpenWRT, which has a release candidate with musl 1.2 so we would need to update to that (or the final release when it is made).

Thanks @danielframpton! I think that with your changes, we should be able to make the migration to musl 1.2. Could you explain what the situation is with riscv32 and hexagon? Are the musl ports for those platforms still stuck on musl 1.1?

Could you explain what the situation is with riscv32 and hexagon? Are the musl ports for those platforms still stuck on musl 1.1?
I've been working with @danielframpton on this and the problem we ran into was that we haven't been able to test any changes for these Tier 3 platforms because they have no CI or bors coverage. hexagon seems to have a musl 1.2.2 fork here and riscv32 support hasn't landed in upstream musl either as far as we can tell. Currently, the branch linked above doesn't really make changes to either target since we just weren't sure what to do with them. Given that these are Tier 3 targets, we thought this was probably ok and they can be fixed in later PRs by people who can validate their changes.

The other thing worth mentioning is that mips-unknown-linux-musl and mipsel-unknown-linux-musl use the OpenWRT distro in CI which doesn't currently have a stable release with musl 1.2. There is an upcoming release which includes musl 1.2 so we could choose to use a pre-release version of OpenWRT for those Tier 2 targets for now.

Could you explain what the situation is with riscv32 and hexagon? Are the musl ports for those platforms still stuck on musl 1.1?

No, there is no old time ABI for new/future archs and no symbol redirections. All new archs are natively time64 from the beginning with the unadorned symbol names. This also applies to any old archs being newly added to musl later, like for example sparc32.

Since the condition is more complex than just cfg(all(target_env = "musl", target_pointer_width = "32")), we should instead have a musl_time64_abi cfg set by the build script which is enabled when we need to link to the time64 symbols instead of the normal ones. The condition is basically "target is musl and __time64 symbol exists in libc".

I would actually prefer a hard-coded list of archs. We don't have an easy way to check for the existence of a symbol in a library. We could attempt to probe for it like autoconf does but this would significantly impact the build time of the libc crate and would require adding additional build-time dependencies (cc).

OK.
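Such a hard-coded check might look like the following in a build script. The `musl_time64_abi` cfg name is from the comment above; the arch list is illustrative only (the real list would be the pre-1.2 musl ports that gained redirections — which new ports such as riscv32 deliberately do not have):

```rust
// Sketch of a build.rs: decide whether the target links the time64
// symbols via redirected names (__clock_gettime64 and friends).
fn needs_time64_redirects(env: &str, arch: &str, pointer_width: &str) -> bool {
    // Illustrative, not authoritative: 32-bit archs that existed before
    // musl 1.2 and therefore carry the time32-compat symbol redirections.
    const REDIRECTED: &[&str] = &["x86", "arm", "mips", "powerpc"];
    env == "musl" && pointer_width == "32" && REDIRECTED.contains(&arch)
}

fn main() {
    // Cargo exposes the target configuration to build scripts as env vars.
    let env = std::env::var("CARGO_CFG_TARGET_ENV").unwrap_or_default();
    let arch = std::env::var("CARGO_CFG_TARGET_ARCH").unwrap_or_default();
    let width = std::env::var("CARGO_CFG_TARGET_POINTER_WIDTH").unwrap_or_default();
    if needs_time64_redirects(&env, &arch, &width) {
        println!("cargo:rustc-cfg=musl_time64_abi");
    }
}
```

The crate source would then gate the `#[link_name = "...64"]` redirects on `cfg(musl_time64_abi)` instead of on a raw target-triple check.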
Since the list of affected archs is non-expanding (just those existent at the time time64 was introduced) it's not really a big deal to hard-code it.

Hey everyone, just wanted to provide an update on where we're at. I've been working on implementing @Amanieu's feedback about introducing a musl_time64_abi cfg flag to control when we link to the time64 family of symbols (in my musl-1.2 branch). That's been implemented in 6468d63bfdbe064a27b36fbfc63208d052be77d6. Along the way, I discovered that when libc-test runs in CI, it doesn't actually link with the version of musl CI is testing; it uses the musl bundled with rustc's target. I've opened #2893 to fix that (it's the first commit in my branch). After fixing CI, that uncovered a number of other changes we needed to make for various targets (d76bd2f47c9ffdc74f9b2ffdf3e2c4cbbdcf0f99...94d7b3a97bbe5428eb2eea19e868129e849c7130): mostly just adjusting constants and field orderings. Finally, I also updated the mips CI runners to use OpenWRT 22.03 rc6 artifacts so they can be tested with musl 1.2.3 (ede8be23f05e01ff890cbf00f727c3d590564b26...b79bf16c06637c8aad277b63d8b9aacc52dd12a8).

At this point, I believe all of the prior discussion points have been addressed and the next step is to figure out how we can do a crater run to see what breaks in the ecosystem. I hope to have some time later in the week to work on that.

A first crater run of these changes has been completed for i686-unknown-linux-musl. I'm still analyzing the full results but to give a brief summary:
- 5749 regressions out of 248247 packages tested.
- 104 root regressions.
- 56 root regressions appear to be unrelated to this change (flaky tests, a few nightly compiler ICEs, etc) and I believe can be ignored.
- Of the remaining 48 regressions, 27 of these are because timespec now contains private padding fields on i686-unknown-linux-musl where it previously didn't. As such, a timespec struct can no longer be created using struct literal syntax.
@Amanieu how does libc typically handle this kind of change? I'm thinking it would make sense to target some of the most used crates in this category and update them like this so that they can build with both older and newer versions of libc. Where are these padding fields coming from? I don't see them in our definition of struct timespec. Here on the Rust side which corresponds to here on the musl side. If we don't include the trailing padding on little-endian, then the size of timespec will shrink from 24 bytes to 16. Is there a different way you're thinking of to keep the size of the struct the same? That seems wrong: the size should be 16 bytes on all platforms. Looking at the C code, the padding is only added when sizeof(time_t) != sizeof(long) (i.e. on 32-bit targets only). If we don't include the trailing padding on little-endian, then the size of timespec will shrink from 24 bytes to 16 bytes. Is there a different way you're thinking of to keep the size of the struct the same? I'm confused. The size of struct timespec is 16 bytes on all musl archs (with time64). The form is 8 bytes of tv_sec followed by 8 bytes of padding together with tv_nsec. On 64-bit archs there is no padding because tv_nsec (long) is already 8 bytes. On 32-bit archs, tv_nsec (long) is only 4 bytes, and the 4 bytes of padding appear on whichever side makes the location of the significant bits line up with where they would be in a 64-bit word with the arch's endianness. If you're getting it as 24 bytes somehow, then Rust has a wrong definition for the type. Of the remaining 48 regressions, 27 of these are because timespec now contains private padding fields on i686-unknown-linux-musl where it previously didn't. As such, a timespec struct can no longer be created using struct literal syntax. In C, creation of a struct literal timespec is not valid without designated initializers. 
I don't know what the Rust equivalent is, but it's fundamentally wrong to assume an ordering of the members or absence of any additional members. Code doing so is buggy. The spec is:

    The <time.h> header shall declare the timespec structure, which shall include at least the following members:
    time_t tv_sec    Seconds.
    long   tv_nsec   Nanoseconds.

(Emphasis mine.)

In C, creation of a struct literal timespec is not valid without designated initializers. I don't know what the Rust equivalent is, but it's fundamentally wrong to assume an ordering of the members or absence of any additional members. Code doing so is buggy. The spec is:

Rust only has one kind of initialization for structs, which is similar to C designated initializers. Except it requires all fields to be specified, whereas C will zero-initialize any omitted fields. The "proper" solution would be to zero-initialize the struct and then fill in its fields one-by-one as done here. The downside is that this is much more verbose and requires some unsafe code.

Sorry, that's my bad. I was looking at x86_64 in godbolt and messed up the code while trying to get it to compile there. I see what you're saying now and that should work fine. I'll try that and see what the crater results look like.

I see what you're saying now and that should work fine.

So that actually does not work fine. I thought i64 on i686 has 8-byte alignment and therefore the overall structure would have 8-byte alignment, forcing the final, trailing padding bytes, but it only has 4-byte alignment. However, without the explicit trailing padding, timespec only ends up taking 12 bytes instead of 16. @Amanieu I think that means we have to include the private padding field which will prevent use of the literal struct initialization syntax unless you have another idea how we can force sizeof(timespec) == 16 without private fields.
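A self-contained sketch of both points being discussed: a timespec with a private padding field (so struct literal syntax can't construct it from outside the module) and the zero-then-fill initialization pattern. The layout mimics the 32-bit little-endian case described above (i64 tv_sec, 32-bit tv_nsec, trailing pad); on big-endian 32-bit targets the pad would sit before tv_nsec instead:

```rust
use std::mem;

// Models musl's time64 timespec on a 32-bit little-endian target:
// 8 bytes of tv_sec, then tv_nsec sharing a 64-bit slot with 4 pad bytes.
#[allow(non_camel_case_types, dead_code)]
#[repr(C)]
pub struct timespec {
    pub tv_sec: i64,
    pub tv_nsec: i32, // C `long` on a 32-bit target
    __pad: i32,       // private: blocks `timespec { .. }` literal syntax
}

fn make_timespec(sec: i64, nsec: i32) -> timespec {
    // Zero-initialize, then fill the public fields one by one — the
    // "verbose but proper" pattern from the discussion. Safe here because
    // an all-zero bit pattern is a valid value for every field.
    let mut ts: timespec = unsafe { mem::zeroed() };
    ts.tv_sec = sec;
    ts.tv_nsec = nsec;
    ts
}

fn main() {
    let ts = make_timespec(1, 500_000_000);
    assert_eq!(mem::size_of::<timespec>(), 16);
    assert_eq!(ts.tv_sec, 1);
    assert_eq!(ts.tv_nsec, 500_000_000);
}
```

(Note the size assertion holds on common hosts regardless of whether i64 has 8-byte or 4-byte alignment, since the fields sum to 16 either way; the endianness-dependent pad placement is the part that can't be expressed without a cfg on the field order.)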
Indeed, setting alignment to 8 does not work for archs without a full alignment requirement for 64-bit types, and of course doesn't solve the big endian case where the padding is before tv_nsec not after. Maybe this is a sign that Rust should have [an extension to do?] default initialization like C does for unmentioned members. I think the core issue here is that the Rust feature is incompatible with structures that may have implementation-defined or internal/private fields that are not part of the API, and thus that application code is not allowed to mention by name. ..zeroed() copies any unmentioned field values from the return value of zeroed(). Note that they'd still have to be public, and #[non_exhaustive] doesn't help either (due to how FRU syntax is desugared). But we don't strictly have to match names with what musl calls its unmentionable fields and what we call them, so I guess that's fine. @kaniini Could you pull the changes from my musl-1.2 branch into this PR? I've rebased and verified that a bors run will be green. After that, I think we're ready to start an FCP! 🙂 Closing in favor of https://github.com/rust-lang/libc/pull/3068, thanks for the PR!
gharchive/pull-request
2021-02-28T15:21:58
2025-04-01T06:45:41.170775
{ "authors": [ "12101111", "Amanieu", "JohnTitor", "danielframpton", "ericonr", "joshtriplett", "kaniini", "richfelker", "rust-highfive", "thomcc", "wesleywiser" ], "repo": "rust-lang/libc", "url": "https://github.com/rust-lang/libc/pull/2088", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1360904043
Support BCRYPT_RNG_ALG_HANDLE rust-lang/rust#101325 I haven't tested this on a Windows host, brace for CI... cc @ChrisDenton @thomcc does this implementation make sense? @bors r=ChrisDenton :pushpin: Commit ee1c1e6d7850cc5ff366b5e1855918e0fc1d80b5 has been approved by ChrisDenton It is now in the queue for this repository. :hourglass: Testing commit ee1c1e6d7850cc5ff366b5e1855918e0fc1d80b5 with merge ec43f1dd9b8bfed8939e8df8424d329d7e1c0253... :sunny: Test successful - checks-actions Approved by: ChrisDenton Pushing ec43f1dd9b8bfed8939e8df8424d329d7e1c0253 to master...
gharchive/pull-request
2022-09-03T16:16:47
2025-04-01T06:45:41.186572
{ "authors": [ "bors", "saethlin" ], "repo": "rust-lang/miri", "url": "https://github.com/rust-lang/miri/pull/2533", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
94359467
SIMD groundwork Rendered.

X-posting my late comment from the pre-RFC: What I was saying was a bit more than that. From my earlier post, these may not need to be intrinsics at all - one-instruction-of-asm!() functions with #[inline(always)] and proper register specifiers on the asm!() can do the job except for the quirky magic structural typing. The ergonomics of strict types, whether [u32; 4] or Simd4<u32>, really aren't that bad for low-level building blocks that will mostly live behind prettier interfaces. What problems there are with the ergonomics can be largely resolved with T: Structural<Layout = [u32; 4]> + SimdSafe, where SimdSafe is a marker trait denoting the same things as #[repr(simd)], and possibly added by it. And yeah, I edited "alignment voodoo" into my post before you mentioned that :stuck_out_tongue:

Anyway, the result of the above is that one only really needs two changes to the compiler:
- #[repr(simd)]
- #[lang_item="simd_repr_marker"] (added by #[repr(simd)])

Structural can be done without any help from the compiler, but would benefit a lot from a #[derive]. But as far as benefits, this avoids a large mass of worryingly magical (regarding parameter types) intrinsics being added to the compiler.

these may not need to be intrinsics at all

Unfortunately if we want to get SIMD on stable Rust this would necessitate a stable asm! macro, and we've got a much stronger story for stabilizing these intrinsics than we do that macro.

What problems there are with the ergonomics can be largely resolved with T: Structural<Layout = [u32; 4]> + SimdSafe, where SimdSafe is a marker trait denoting the same things as #[repr(simd)], and possibly added by it.

I believe a core aspect of this RFC is that it's stabilizing the absolute bare minimum of what the compiler needs to support SIMD. If we add in a few language traits and other various types here and there it's more surface area that will have to be stabilized. If this stuff can be built externally in a library, that'd be great! This RFC, however, is just focused on the compiler support.
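The library-side alternative floated in that comment could be sketched as below. None of this is a real API — `Structural`, `SimdSafe`, and `Simd4` are the hypothetical names from the comment, and the real proposal would additionally need the nightly-only #[repr(simd)] on the vector type (plain #[repr(C)] stands in for it here):

```rust
use std::mem;

/// Marker for types whose layout is safe to hand to SIMD operations
/// (the comment imagines #[repr(simd)] adding this automatically).
pub unsafe trait SimdSafe {}

/// Structural-typing trait: implementors promise layout compatibility
/// with `Layout`.
pub unsafe trait Structural {
    type Layout;
}

#[repr(C)] // the real thing would be #[repr(simd)]
pub struct Simd4<T>(pub T, pub T, pub T, pub T);

unsafe impl SimdSafe for Simd4<u32> {}
unsafe impl Structural for Simd4<u32> {
    type Layout = [u32; 4];
}

// A low-level operation can now demand layout compatibility via bounds
// instead of compiler-side structural typing of intrinsic imports.
fn sum_lanes<T: Structural<Layout = [u32; 4]> + SimdSafe>(v: T) -> u32 {
    // Sound only because the unsafe trait bound promises [u32; 4] layout.
    let lanes: [u32; 4] = unsafe { mem::transmute_copy(&v) };
    lanes.iter().sum()
}

fn main() {
    assert_eq!(sum_lanes(Simd4(1u32, 2, 3, 4)), 10);
}
```

The trade-off debated in the thread is visible here: the bounds buy type-level layout guarantees in library code, but something compiler-side is still needed to actually put the values in vector registers.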
If this stuff can be built externally in a library, that'd be great! This RFC, however, is just focused on the compiler support. This RFC, however, is just focused on the compiler support. Sure! I just think that the way it makes the intrinsics unify types with sufficiently similar layouts under #[repr(simd)] is neither necessary nor advisable, and wanted to lay out a library-based alternative. some subset of intrinsics magically ignoring that feels very questionable to me. Hm, maybe I haven't been clear. They're not really ignoring it. Suppose we have: #[repr(simd)] struct A(f64, f64); #[repr(simd)] struct B(f64, f64); extern { fn some_simd_intrinsic(x: A); } It's not legal to call some_simd_intrinsic(B(0.0, 1.0)) (type error). The "structural typing" of intrinsics just means it's valid to also write the above and (if desired) extern { fn some_simd_intrinsic(x: B); } elsewhere. I see it some what similar to how importing C functions work: if some C function takes a struct Foo, we don't require that there's some single canonical type (or structural generic) always passed to every import of that C function, just that the function passed has the right layout. (That said, it's slightly different, since the compiler doesn't enforce that arguments have the right layout for C functions, whereas the SIMD intrinsics do have layout-enforcement.) Ah, that does make rather more sense, and is not the impression I had gotten. There would additionally be a small set of cross-platform operations that are either generally efficiently supported everywhere or are extremely useful. These won't necessarily map to a single instruction, but will be shimmed as efficiently as possible. shuffles and extracting/inserting elements comparisons Lastly, arithmetic and conversions are supported via built-in operators. The Motivation section mentions how this RFC aims to provide just some ground-work on top of which nice SIMD functionality could be built. 
While builtin arithmetic, shuffles etc for repr(simd) types is nice and convenient, providing it at this level seems questionable. I think something like this could be accomplished inside the to-be-written SIMD library as well with some operator overloading and the intrinsic functions for basic arithmetic. The indices iN have to be compile time constants. I have a bad feeling about this. A regular method call shouldn't require parameters to be compile time constants. Using generics to express this requirement as shown here depends on #1062, but it would be a much cleaner solution. Out of bounds indices yield unspecified results. Wouldn't a compile error be nicer here? That aside, when reading this RFC I had the same thought as @eternaleye: Why implement compiler magic when these functions could be implemented in plain rust with asm!()? The absence of stable inline asm in a systems programming language is annoying and while this RFC attempts to make up for that by simply importing all C/C++ intrinsics into Rust, this set grows regularly, requiring a compiler update every time when an update of the system LLVM would be sufficient. (Well, it's mostly the maintenance overhead for rust. Why maintain a set of intrinsics when someone else is already doing it?) Additionally, inline asm allows the programmer to influence things like instruction scheduling and register allocation (within the asm section), in case the compiler is doing a bad job in that regard. So I'd suggest solving the "Operations" section of the RFC in a way that doesn't require any compiler changes (at least not specifically for SIMD). I'm unsure about what the repr(simd) introduced here really does. I guess its primary purpose is signaling to the compiler that this struct can live in the SIMD registers and be subject to SIMD operations (like this builtin arithmetic). 
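For reference on the shuffle point raised above: the RFC's shuffles take two input vectors and a list of indices into their concatenation, with the indices required at compile time so the backend can pick instructions. The semantics (minus the const requirement and the actual vector codegen) can be modeled in plain stable Rust — note this version makes out-of-bounds indices a hard error rather than the RFC's "unspecified results":

```rust
// Model of a 4-lane shuffle: each output index selects a lane from the
// virtual 8-lane concatenation [a, b]. The real intrinsic receives the
// indices as compile-time constants; here they are a runtime argument.
fn shuffle4(a: [f32; 4], b: [f32; 4], idx: [usize; 4]) -> [f32; 4] {
    let concat = [a[0], a[1], a[2], a[3], b[0], b[1], b[2], b[3]];
    let mut out = [0.0f32; 4];
    for (o, &i) in out.iter_mut().zip(idx.iter()) {
        assert!(i < 8, "shuffle index out of bounds");
        *o = concat[i];
    }
    out
}

fn main() {
    let a = [1.0, 2.0, 3.0, 4.0];
    let b = [5.0, 6.0, 7.0, 8.0];
    // Interleave the low halves of a and b.
    assert_eq!(shuffle4(a, b, [0, 4, 1, 5]), [1.0, 5.0, 2.0, 6.0]);
    // Reverse a single vector by shuffling it with itself.
    assert_eq!(shuffle4(a, a, [3, 2, 1, 0]), [4.0, 3.0, 2.0, 1.0]);
}
```

The const-index requirement exists so the compiler can lower an arbitrary index pattern to a short instruction sequence (or a single shuffle instruction) instead of the scalar loop above.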
People often argue that vectorization is best left to the compiler and as Rust uses LLVM, many simple cases can benefit from great optimizations. But unfortunately optimizers aren't perfect and simply unable to handle sufficiently complex code, so obviously an explicit way for programmers to express how they want code to be vectorized is necessary. Somewhere above, I suggested removing even those basic operations from repr(simd) types and doing them in intrinsics (or preferably inline asm) instead. (Danger ahead: I fear these ideas might require significant LLVM changes and therefore be infeasible) Then we could think further and consider removing repr(simd): The inline asm constraints (or the intrinsics) should ensure that these values stay in the SIMD registers while we're working with them. While we aren't, it's up to the compiler. As a result, the compiler could decide to hold even non-SIMD values inside the SIMD registers which is probably not a big issue as I remember the Intel optimization manual mentioning how spilling to XMM registers can be faster than spilling to memory. So to summarize, I'm in favor of simplifying/breaking up the "types" and the "operations" sections of the RFC by building on top of much less specific compiler features and pushing as much work as possible into the to-be-written SIMD crate (and moving the "platform detection" section into its own RFC). This feels appropriate as the RFC's intention was just basic groundwork. Or in other words: I'm arguing that "the absolute bare minimum of what the compiler needs to support SIMD" is zero. The missing parts aren't necessarily SIMD-specific. @main-- I'm unsure about what the repr(simd) introduced here really does. I guess its primary purpose is signaling to the compiler that this struct can live in the SIMD registers and be subject to SIMD operations (like this builtin arithmetic). 
- Can live in SIMD registers (though, not sure how it'll handle someone trying to apply it to [u64; 37] or other silliness)
- Interior references are forbidden
- Tweaks the in-memory layout to match SIMD for the platform
- Subject to SIMD operations
- Subject to SIMD alignment constraints

@huonw, did I miss anything? ISTR you mentioning it didn't validate things like that they need to be homogeneous, and that such things would be left up to impl'ing some unsafe trait.

Arithmetic identity-based optimizations apply just as well to SIMD operations as they do to non-SIMD operations. By using the intrinsics as opposed to inline asm we give LLVM the ability to do those optimizations.

@pcwalton Oh, I didn't know that! Yes, that's a big advantage of the intrinsics then.

Concerning the structural typing when importing the intrinsics: Please be careful that this does not end up allowing people to "peek through" private abstractions of data-types. That would be a horrible mess of a safety issue. Essentially, such merely structural typing should only be allowed if the module had access to all the fields of the type anyway: either because they were all public (all the way down), or because the type was defined in the same module.

What's the reason for choosing this unconventional approach to typing, rather than using tuples, or arrays, or lang-items for the Simd* types?

Just a note to all the people that keep mentioning inline assembly: asm! is pretty much a black box for LLVM. You can tell it a decent amount about the contents, but at the end of the day, it's going to have to take a conservative approach to it. LLVM can reason a lot more about regular operations on vectors (arithmetic, equality, shuffling) and intrinsic functions than it can single-instruction inline asm segments.

The Motivation section mentions how this RFC aims to provide just some ground-work on top of which nice SIMD functionality could be built.
While builtin arithmetic, shuffles, etc. for repr(simd) types is nice and convenient, providing it at this level seems questionable. I think something like this could be accomplished inside the to-be-written SIMD library as well with some operator overloading and the intrinsic functions for basic arithmetic.

I agree that it isn't totally necessary to actually use the arithmetic operators: we could instead use a generic intrinsic similar to the comparison operators. However, I think it is important we do more than the platform intrinsics: LLVM (and compilers in general) knows more about its internal add instruction than arbitrary platform-specific intrinsics, and so may be able to optimise it more aggressively. For shuffles, the optimisation applies: the compiler can e.g. simplify a sequence of shuffles into a single one. Also, the RFC discusses this. One point in it is the compiler synthesizing an optimal (/close to optimal) sequence of instructions for an arbitrary shuffle, instead of forcing the programmer to think about doing that themselves.

I have a bad feeling about this. A regular method call shouldn't require parameters to be compile-time constants. Using generics to express this requirement as shown here depends on #1062, but it would be a much cleaner solution.

This isn't a regular method call: intrinsics are special in many ways. Note that my solution on #1062 that you link to just calls the intrinsic. This is the low-level API, people generally won't be calling the intrinsics directly.

Wouldn't a compile error be nicer here?

Yes, sort of. However, using the trick mentioned in #1062 would result in very poor error messages, since the shuffle order may be passed through multiple layers of generic function calls possibly in external crates, meaning the out-of-bounds error is generated deep inside code that the programmer didn't write.

Why implement compiler magic when these functions could be implemented in plain Rust with asm!()?

As others have said, asm!
is a black-box, and seriously inhibits optimisations. Additionally, inline asm allows the programmer to influence things like instruction scheduling and register allocation (within the asm section), in case the compiler is doing a bad job in that regard. Neither of these apply to this: the API is essentially exposing individual CPU instructions, i.e. each asm! block is a single instruction. Hence, there's no scheduling benefit, and none of the asm! blocks would use concrete registers: they'd all be "generic", to let the compiler allocate registers as it sees fit. These reasons apply if one was, say, writing an entire inner loop as a single asm! block, but it doesn't apply here. I'm unsure about what the repr(simd) introduced here really does. I guess its primary purpose is signaling to the compiler that this struct can live in the SIMD registers and be subject to SIMD operations (like this builtin arithmetic). Yes. repr(simd) changes how a type is represented. E.g. it changes the alignment, imposes element constraints, and even changes its ABI (for function/FFI calls). Concerning the structural typing when importing the intrinsics: Please be careful that this does not end up allowing people to "peek through" private abstractions of data-types. That would be a horrible mess of a safety issue. It sort-of does, but in a very very restricted way, that's already possible with transmute. What's the reason for choosing this unconventional approach to typing, rather than using tuples, or arrays, or lang-items for the Simd* types? Tuples and arrays don't have the right low-level details. repr(simd) is essentially acting as a lang-item that can be defined multiple times. All of the actual lang items (i.e. #[lang = "..."]) in the compiler can only be defined once in the entire hierarchy of dependencies of a compilation target, which means we'd either have to allow multiple versions of these lang items, or just disallow linking multiple SIMD crates into a project (e.g. 
two different crates that define low-level SIMD interfaces, or even just versions 0.1 & 0.3 or 1.0 & 2.3 or ... of a single SIMD crate).

It sort-of does, but in a very very restricted way, that's already possible with transmute.

Transmute requires unsafe. It shouldn't be possible for safe code to violate abstraction boundaries. What you are proposing (if I follow your RFC correctly) is essentially that #[repr(simd)] implies that all fields are public, but the programmer doesn't have to write pub. The restriction that the type used for the intrinsic has to be defined in the same module shouldn't be a problem for the implementations you envision (with some crate(s) taking care of providing a decent abstraction), right?

Hm, I misunderstood what you were talking about. I'm unsure what the problematic situation you're envisioning could be. Is it something like: Crate foo defines a simd Simd, crate bar depends on foo and loads the Simd type. bar extern's in an intrinsic like extern { fn some_simd_intrinsic(x: Simd); }... and then something bad happens? NB. the only way to call intrinsics is with unsafe. (I'm not against the privacy restriction, I'm just trying to understand the motivation more concretely.)

repr(simd) is essentially acting as a lang-item that can be defined multiple times.

Is a lang item the best intuition here? Wouldn't a closer analogy be repr(C)? In both cases the semantic content of the type is (mostly) unaffected and you're just specifying its underlying representation, which is mostly relevant at the ABI, rather than the API, level.

The repr(simd) may not enforce that any trait bounds exists/does the right thing at the type checking level for generic repr(simd) types. As such, it will be possible to get the code-generator to error out (ala the old transmute size errors),

Would it be possible to just make this best-effort, and fall back to laying the type out normally if it doesn't meet the SIMD requirements (perhaps with a warning)?
That seems cleaner than implicit requirements at the code generator level, which feels like a contract violation (if it passes the typechecker, it should compile), and perhaps more in the spirit of #repr.

It is illegal to take an internal reference to the fields of a repr(simd) type, because the representation of booleans may require modification, so that booleans are bit-packed.

I wonder if the fact that the borrow checker enforces limits on observability wouldn't actually let us support the interior reference-taking, just in a less efficient way, by first copying the field out onto the stack when taking a shared & reference, and in the case of &mut, also copying it back when it goes out of scope. (Off the top of my head, the potential complication that comes to mind here is generic code - copying the value back is effectively a Drop impl for the &mut. But it does seem like it could actually be implemented in precisely that way... at least, I can't immediately think of why not.) In this way it would be even more truly the case that #repr has no effect on the semantics, only on the representation and performance characteristics. (I had a similar idea, earlier, here.)

Any type marked repr(simd) automatically has the +, - and * operators work. The / operator works for floating point, and the << and >> ones work for integers.

I might be more comfortable with this if you had to explicitly write derive(Add), and so on, to get the desired operations, even if that in turn just ended up calling out to appropriate compiler magic. On the one hand, it does seem logical that "why would you repr(simd) if not to get the SIMD operations", but on the more important-seeming hand, I think there should be a separation of concerns, and #repr should really only affect the representation (as far as possible).

NB. the only way to call intrinsics is with unsafe.

This essentially means it's much less of an issue than I thought.

Hm, I misunderstood what you were talking about.
I'm unsure what the problematic situation you're envisioning could be. Which part is unclear - whether these rules allow code to get around the restriction that usually apply to private fields, or whether getting access to private fields is an issue? The answer to the latter is that it violates parametricity - for now I'll just assume that as accepted. Please tell me if I should elaborate on that. Unsafe code (through transmute) can violate parametricity anyways, but I would still prefer if no additional violations would be introduced. Regarding the first part, let me try to come up with some examples. I assume "A::Simd" is some other crate's Simd type, with all fields private, and B::Simd is our own Simd. If we have fn x86_mm_add_epi16(a: Simd8<i16>, b: Simd8<i16>) -> Simd8<i16> and the types all just match structurally, I could convert any B::Simd to an A::Simd by adding a 0, and choosing the argument types to be B::Simd and the return type to be A::Simd. Similarly, I can convert A::Simd to B::Simd. This gives me full access to all the private fields. If we have fn simd_shuffle2<T, Elem>(v: T, w: T, i0: u32, i1: u32) -> Simd2<Elem>, I can choose T to be A::Simd and the return type to be B::Simd and convert from A::Simd to B::Simd with the appropriate options for the shuffles - and back, with a similar trick. Ah, I think you might be working with the same misunderstanding as @eternaleye, that the intrinsics can be called with any type that matches structurally, it's just that they can be declared with any type that matches structurally. See https://github.com/rust-lang/rfcs/pull/1199#issuecomment-120537682 . (I 100% agree that being able to write a transmute via SIMD intrinsics would be unfortunate.) I had that misunderstanding at first, but the post above was written without that assumption. The first one, regarding add, can't I declare this with fn x86_mm_add_epi16(a: A::Simd, b: A::Simd) -> B::Simd? 
And the second one, with shuffle, is actually explicitly declared generically, so it can be used with any T, right? Otherwise, why the distinction between explicit generics for shuffle, and implicit for abs? Is a lang item the best intuition here? Wouldn't a closer analogy be repr(C)? In both cases the semantic content of the type is (mostly) unaffected and you're just specifying its underlying representation, which is mostly relevant at the ABI, rather than the API level. I agree that repr(C) is probably closer, however I was responding to a comment talking about lang-items. :) Would it be possible to just make this best-effort, and fall back to laying the type out normally if it doesn't meet the SIMD requirements (perhaps with a warning)? That seems cleaner than implicit requirements at the code generator level, which feels like a contract violation (if it passes the typechecker, it should compile), and perhaps more in the spirit of #repr. I think we can relax this in future if we find the RFC doesn't work well in practice. (It's part of the reason I proposed a hard error.) I wonder if the fact that the borrow checker enforces limits on observability wouldn't actually let us support the interior reference-taking, just in a less efficient way, by first copying the field out onto the stack when taking a shared & reference, and in the case of &mut, also copying it back when it goes out of scope. (Off the top of my head, the potential complication that comes to mind here is generic code - copying the value back is effectively a Drop impl for the &mut. But it does seem like it could actually be implemented in precisely that way... at least, I can't immediately think of why not.) Interesting idea, however it seems relatively complicated, and not worth it for SIMD: efficient SIMD code won't be handling/mutating individual elements like this very much. 
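As an aside, the copy-out/copy-back idea for &mut access discussed above (where the write-back is effectively a Drop impl) can be sketched in stable Rust; this is purely illustrative, using a plain array as a stand-in for a SIMD vector whose fields couldn't be referenced in place:

```rust
use std::ops::{Deref, DerefMut};

// Illustrative guard for "copy the lane out, hand out a real &mut,
// write it back on scope exit" — the write-back is exactly a Drop impl.
struct LaneGuard<'a> {
    vec: &'a mut [u32; 4],
    lane: usize,
    value: u32,
}

impl Deref for LaneGuard<'_> {
    type Target = u32;
    fn deref(&self) -> &u32 { &self.value }
}

impl DerefMut for LaneGuard<'_> {
    fn deref_mut(&mut self) -> &mut u32 { &mut self.value }
}

impl Drop for LaneGuard<'_> {
    fn drop(&mut self) {
        // Copy the (possibly modified) value back into the vector.
        self.vec[self.lane] = self.value;
    }
}

fn lane_mut(vec: &mut [u32; 4], lane: usize) -> LaneGuard<'_> {
    LaneGuard { value: vec[lane], vec, lane }
}

fn main() {
    let mut v = [1u32, 2, 3, 4];
    {
        let mut lane = lane_mut(&mut v, 2);
        *lane += 10; // mutates the stack copy
    } // guard dropped here: the copy is written back
    assert_eq!(v, [1, 2, 13, 4]);
    println!("ok");
}
```

The generic-code complication mentioned above shows up here too: the write-back only happens because the guard type carries the Drop impl, which a bare &mut could not.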
I might be more comfortable with this if you had to explicitly write derive(Add), and so on, to get the desired operations, even if that in turn just ended up calling out to appropriate compiler magic. On the one hand, it does seem logical that "why would you repr(simd) if not to get the SIMD operations", but on the more important-seeming hand, I think there should be a separation of concerns, and #repr should really only affect the representation (as far as possible). Another alternative is just providing arithmetic intrinsics. The first one, regarding add, can't I declare this with fn x86_mm_add_epi16(a: A::Simd, b: A::Simd) -> B::Simd? Oh, I see. It seems sensible to disallow it. I.e. have nominal equality constraints within a definition. And for the shuffle, there's an implicit checked-at-code-gen link between T and Elem (i.e. Elem is the actual element type of T). Shuffle is generic because it can be used with literally any SIMD type, i.e. it's not restricted to some subset of types with the same structure. (I just pushed a few more commits to (hopefully!) improve the RFC based on the discussion so far.) Oh, I see. It seems sensible to disallow it. I.e. have nominal equality constraints within a definition. That would fix the concrete issue I mentioned, but it would still be used to violate the invariants of other types. Like (warning: silly example) some Simd8<u16> that ensures that all components are either 0 or 1. Imagine that's somehow crucial for safety. If now a 3rd-party module defines the add function that that type, suddenly the invariant can be violated and things can crash. And for the shuffle, there's an implicit checked-at-code-gen link between T and Elem (i.e. Elem is the actual element type of T). Oh, wow. That's very surprising. And I agree with the comment made above that well-typed code should always compile. What's the reason this one is a generic function with code-gen check, while the others can only be imported as non-generic functions? 
Also, even with the check, how does that help to prevent converting between different nominal types from different modules, that happen to share their element type? That would fix the concrete issue I mentioned, but it would still be used to violate the invariants of other types. Like (warning: silly example) some Simd8 that ensures that all components are either 0 or 1. Imagine that's somehow crucial for safety. If now a 3rd-party module defines the add function that that type, suddenly the invariant can be violated and things can crash. I think this is partly a problem with the choice of types. It is solved by storing a wrapper around u16 rather than a u16 itself (this is explicitly supported: it is the approach most appropriate for booleans when they are defined as either all 0s or all 1s (bitwise)). However I suppose we could use a privacy-based rule as you suggested earlier. Oh, wow. That's very surprising. And I agree with the comment made above that well-typed code should always compile. What's the reason this one is a generic function with code-gen check, while the others can only be imported as non-generic functions? The shuffles/inserts/etc. don't care about either the vector type or the element type at all; it just cares that the input is a vector; having to import a version for every single type that could be shuffled would get... tiresome. As stated in the RFC the well-typing is trivial to achieve with a trait bound on the intrinsic import, e.g. T: Simd<Elem = Elem>, but I do not think we should start hard-coding relatively complicated traits like that into the compiler. At least, not right now. Also, even with the check, how does that help to prevent converting between different nominal types from different modules, that happen to share their element type? You can't at the moment, it's part of what repr(simd) means and why it's strictly opt-in. 
Specialised semantics shouldn't be added by creating new vector types, they should only be added via special element types or wrappers around vector types. Also handled via your privacy rule. I think this is partly a problem with the choice of types. It is easily solved by storing a wrapper around u16 rather than a u16 itself (this is explicitly supported: it is the approach most appropriate for booleans when they are defined as either all 0s or all 1s (bitwise)). I don't understand. How would a wrapper type help, since intrinsics can be imported based on structural matching, ignoring any wrappers? I don't think the "structual typing" effect is too abstraction-breaking - you can only reach the abstraction-breakage if you declare an intrinsic with a "specially-crafted" signature, and declaring extern functions with "specially-crafted" signatures is already equivalent to a transmute (you can e.g. declare a pointer-sized identity function as for<'a> foo(&'a Foo) -> &'a Bar). The "monomorphization-error" part is a bit more annoying. I prefer to think of it as undefined behaviour detected as compile-time (monomorphized unsafe code can of-course exhibit guaranteed UB even without intrinsics). Theoretically, we could just emit an intrinsics::unreachable() - this can even be part of a useful program, if the UB isn't actually reachable - but we prefer to abort compilation. If I read this RFC correctly, it doesn't allow compiling a single executable or library which can use the best SIMD functionality available at runtime. For instance, an executable compiled for AVX2 wouldn't run on an older CPU, and an executable compiled for SSE2 wouldn't be able to make use of the AVX2 extensions. For C, there's a GCC function attribute (__attribute__((target("...")))) which temporarily changes the SIMD instructions allowed within a single function. That allows you to have for instance a function compiled for AVX2 while everything else is compiled for SSE2 only. 
And GCC has another cool extension (Function Multiversioning) which builds on the target attribute to automatically select at runtime which version of the function to use. For Rust, it would be like:

```rust
#[target_feature = "avx2"]
fn foo(...) -> ... {
    // Implementation used when AVX2 is available.
    // AVX2 intrinsics can be used here, the compiler can generate AVX2 instructions here.
}

#[target_feature = "sse2"]
fn foo(...) -> ... {
    // Implementation used when AVX2 is not available but SSE2 is available.
    // AVX2 intrinsics cannot be used here, the compiler cannot generate AVX2 instructions here.
}

#[target_feature = "default"]
fn foo(...) -> ... {
    // Implementation used when neither is available.
    // What's available here depends on the compiler command line (same as for functions without #[target_feature]).
}
```

The same function is defined three times, but only one will be called at runtime, depending on the available CPU features. The order doesn't matter, it chooses the best one available.

@cesarb Function multiversioning is VERY platform-dependent - it requires support from the dynamic linker, because it's built on top of ifuncs. I don't think LLVM can even generate those at present, and they only work on Linux+glibc last I checked.

@eternaleye

Function multiversioning is VERY platform-dependent - it requires support from the dynamic linker, because it's built on top of ifuncs.

It is that way in C. It doesn't have to be that way in Rust. For instance, it could use ifuncs where available, and do something else where not available.

@cesarb Perhaps, but that "something else" would also need to be specified. Something that works along the lines of lazy_static! under the hood could probably function almost identically to ifuncs for example. However, that could be a whole RFC in its own right, and then function multiversioning a second RFC over the top of that.
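For what it's worth, that ifunc-like "resolve once, on first call" dispatch can be sketched portably in today's stable Rust; the feature check below is a stub (a real one would query CPUID, e.g. via the `is_x86_feature_detected!` macro), so this only models the dispatch mechanism, not the detection:

```rust
use std::sync::OnceLock;

// Stub standing in for real CPU-feature detection (e.g. CPUID or the
// `is_x86_feature_detected!` macro); hardcoded false for this sketch.
fn avx2_available() -> bool {
    false
}

// Placeholder "specialized" implementation; a real one would use AVX2.
fn sum_avx2(xs: &[u32]) -> u32 {
    xs.iter().sum()
}

fn sum_fallback(xs: &[u32]) -> u32 {
    xs.iter().sum()
}

// ifunc-style dispatch: resolve the implementation once, at first call,
// then reuse the cached function pointer on every later call.
fn sum(xs: &[u32]) -> u32 {
    static IMPL: OnceLock<fn(&[u32]) -> u32> = OnceLock::new();
    let f = *IMPL.get_or_init(|| {
        if avx2_available() { sum_avx2 } else { sum_fallback }
    });
    f(xs)
}

fn main() {
    assert_eq!(sum(&[1, 2, 3, 4]), 10);
    println!("ok");
}
```

Unlike real ifuncs, the selection cost here is a cheap atomic load per call rather than being resolved by the dynamic linker, but no linker support is needed.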
@cesarb to clarify, this RFC does not specify such a dispatch mechanism, but to my understanding nothing in it makes that mechanism not possible. If you have specific concerns about the current design that you think might prevent an auto-dispatch mechanism from being implemented, feel free to voice that concern. Otherwise, as @eternaleye says, this RFC is for getting base-line support for SIMD into the language; additional functionality can be worked out with further RFCs.

@eternaleye, @Aatch Perhaps I shouldn't have mentioned auto-dispatch, it ended up obfuscating what I wanted to mention, which is the other effect of GCC's __attribute__((target("..."))): it changes the target CPU and/or features while compiling the function. The lack of something like that makes creating your own auto-dispatch mechanism much harder, since you have to compile each variant as a separate library (with different compiler options) and link them together. But I agree that it's a separate feature which can be discussed later in its own RFC.

Now, as for the rest of this RFC. I played a bit with SIMD in my blake2-rfc crate, to get the feel of it before giving my opinion. Sorry for the very long comment, but here it goes.

First of all, this RFC already does more than one thing: it has four features which can evolve separately. I believe that each one should have its own separate feature flag, so they can be stabilized separately if necessary.

**Types**

The repr(simd) attribute is mostly a rename of the current simd attribute. It does two things: declare a type to have a "SIMD vector" representation, and implicitly add a few primitive operations to it.

Since the point of this RFC is to allow the creation of SIMD helper crates, and not to be used directly by the end developer, wouldn't it be more flexible to not implement the primitive operations, and instead let the crate developer implement the Add, etc. traits by calling the corresponding intrinsic?
That would allow for instance a SIMD crate which implemented a different Mul operation, or which made Add be saturating. Wrapping the SIMD type in a single-element struct just to be able to implement traits on it doesn't work as expected; I tried, and it confused the optimizer so much that the speed halved (it was moving all the time between SIMD and normal registers, even though everything was force-inlined).

I don't see the point of flattening the types, instead of allowing only structs with the same element type repeated a few times. That part needs additional justification for why it won't be just adding unnecessary complexity to the implementation.

The SIMD types by themselves are already very useful; they have the correct alignment, so they can be passed to SIMD code written in C even if the rest of this RFC isn't stable yet.

**CPU instruction intrinsics**

These are intrinsics which map directly to a CPU instruction. We already have a bit of it, via the link_llvm_intrinsics feature, but the proposal in this RFC is cleaner and more complete (as far as I could find, some intrinsics seem to not be available through the link_llvm_intrinsics feature; for instance, I couldn't find the u64x2 shuffle intrinsics).

I'd like to see even basic things like adding two vectors exposed as intrinsics (x86_mm_add_epi32 for instance). There are many types of addition, and just a + b doesn't make it obvious which one it is (wrapping? saturating?). And a real-life example of "why not inline asm": LLVM was able to convert a left-shift of 1 (using what you'd call the x86_mm_slli_epi64 intrinsic) into an add of the vector with itself. That would not happen with inline asm.

Just leaving this link I found here, as it can be useful: https://software.intel.com/sites/landingpage/IntrinsicsGuide/

**Generic intrinsics**

These are the intrinsics which are the same everywhere, and might map to more than one CPU instruction.
It would be better for them to be a separate feature from the CPU instruction intrinsics, since while the design of the CPU instruction intrinsics is uncontroversial (just copy the C intrinsics), this part can lead to more discussion, so they might need more time to become stable.

The generic shuffles are useful, but I see only two-vector shuffles. A single-vector shuffle is also useful, and simd_shuffle_single_4(v, 1, 2, 3, 0) is easier to understand than x86_mm_shuffle_epi32(v, 0b00_11_10_01). And sometimes a single intrinsic isn't available; I couldn't find so far a single intrinsic to swap the two halves of an u64x2. (Of course, another option would be to do a simd_shuffle4(v, v, 1, 2, 3, 0), and hope that the compiler can notice that only the first vector is being used.)

An alternative design for the generic shuffles would be: simd_shuffle(v, w, s) where s is some kind of Shuffle object, similar to Range (but restricted to be a compile-time constant). That might allow separating the shuffle specification from the call; for instance, one could have a (compile-time constant) fn swap_consecutive64() -> Shuffle which returned a Shuffle which swaps consecutive objects (1, 0, 3, 2, ...).

If the vectors are restricted to just "structs with a repeated element type", insert and extract of a constant index don't need an intrinsic; they're just v.0 = ... and ... = v.0 (or v.x = ... and ... = v.x if they're named). Of course, that doesn't work if the index is not a constant, unless the vector was declared as an array (#[repr(simd)] struct u32x4([u32; 4]) as mentioned in the alternatives).

The result of the comparison intrinsics needs to be better explained; do they return vectors of bool, or vectors of arbitrary-sized elements where all-ones is true and all-zeros is false (like with x86_mm_cmpeq_epi32 or similar)?

**cfg(target_feature)**

The cfg(target_feature = "...") is one of these "how come this doesn't already exist?" things.
It's unnecessarily hard to use anything above SSE2 without it.

**Real-world use**

Are there real-world examples of what a SIMD crate would look like?

While playing with SIMD for my blake2-rfc crate, I wrote a miniature SIMD module (https://github.com/cesarb/blake2-rfc/tree/master/src/simd) with both a "fallback" non-SIMD variant and a SSE2 variant. The same core BLAKE2 code is used for both non-SIMD and SIMD; the difference is completely contained within that SIMD module. While I implemented only what little functionality BLAKE2 needs (basically xor, add, rotate, and a few simple shuffles), it is a working example of a SIMD "crate", and already gives an idea of a few things that would be useful to have from the compiler: a two-vector shuffle intrinsic would be useful, having cfg(target_feature=...) could allow the use of rotate intrinsics and 256-bit vectors, and I'd like a guarantee that the addition wraps.

(Unfortunately, my attempt at SIMD did not work as I had expected. When I enable the SIMD code with cargo bench --features="bench simd", it is ~7% slower (blake2b) or only 1% faster (blake2s) than the fallback code. Not only that, but for blake2s the fallback code is somehow faster than the code before it was converted into vectors, even though it's not using any SIMD types or intrinsics!)

Does anyone know of other real-world examples of a SIMD crate being used by real code (or even Rust code using SIMD directly without giving up and either using inline asm or calling into C code)?

I played a bit more with SIMD in my blake2-rfc crate, and here are my new observations:

- Not having a way to tell cargo to pass -C target-feature=+neon to the compiler is a pain. Without it, LLVM explodes with a LLVM ERROR: Do not know how to split the result of this operator! message. By the way, the + is required, and it won't warn you if it's missing.
- Now I understand better the proposed simd_shuffleN intrinsic: it's LLVM's __builtin_shufflevector (aka shufflevector in the LLVM IR).
Not having something like it means that (if there's no intrinsic available, which is often the case since LLVM expects you to use shufflevector) you have to unpack and repack by hand, and hope that the compiler guesses correctly that you meant to do a shuffle.

- Having a way to convert directly between vectors with different element sizes could be useful. For instance, BLAKE2 has many rotates by "easy" amounts (16, 12, and 8 for the 32-bit BLAKE2s; 32, 24, and 16 for the 64-bit BLAKE2b). If I could get the u32x4 or u64x2 and cast it directly to a u8x16 (without changing its bit representation), and I had a decent shufflevector, I could replace these rotates by shuffles. (Of course, for the particular case of BLAKE2, it would be better if I had a "rotate" intrinsic and the compiler knew how to optimize it into a shuffle.)
- SIMD really makes a difference for 32-bit processors.

Having a way to convert directly between vectors with different element sizes could be useful.

Actually, we already have it, and it works as expected: mem::transmute(). The only thing missing is a decent shuffle intrinsic (or a "rotate" intrinsic which the compiler knew how to optimize into a shuffle for constant amounts). It would be good to add to the RFC a guarantee that mem::transmute() works and does what is expected (reinterpret the vector register as another type, so for instance a u64x2 can be interpreted as a u32x4 without actually changing its contents).

(For NEON, I gave up and used inline assembly, since unlike with SSE2 there are no exposed shuffle intrinsics I could find.)

@RalfJung

I think this is partly a problem with the choice of types. It is easily solved by storing a wrapper around u16 rather than a u16 itself (this is explicitly supported: it is the approach most appropriate for booleans when they are defined as either all 0s or all 1s (bitwise)).

I don't understand.
How would a wrapper type help, since intrinsics can be imported based on structural matching, ignoring any wrappers?

We get to control the exact definition of "structural equality" for these purposes, and I think ensuring that this does (well, doesn't) work is a good idea.

@arielb1

The "monomorphization-error" part is a bit more annoying. I prefer to think of it as undefined behaviour detected as compile-time (monomorphized unsafe code can of-course exhibit guaranteed UB even without intrinsics). Theoretically, we could just emit an intrinsics::unreachable() - this can even be part of a useful program, if the UB isn't actually reachable - but we prefer to abort compilation.

That's a good way to look at it, although I wonder if it's too much of a post-hoc rationalisation.

@cesarb Thanks for the thoughts.

If I read this RFC correctly, it doesn't allow compiling a single executable or library which can use the best SIMD functionality available at runtime. For instance, an executable compiled for AVX2 wouldn't run on an older CPU, and an executable compiled for SSE2 wouldn't be able to make use of the AVX2 extensions.

As @Aatch and @eternaleye said, this RFC is designed for the bare minimum. Fancier dispatch-based rules can and should come later. It's definitely something I've been thinking about since starting this work, and was even discussed on the pre-RFC. There's several problems to solve in that space, and it'd be good to get the basic ground-work: there's non-trivial work one can do with SIMD even in a broadly-supported cross-platform manner.

(For one, my understanding is that LLVM is still just migrating to an architecture to support per-function target specialisations, so any design proposed for allowing __attribute__((target)) wouldn't be able to be implemented in any existing Rust compiler (i.e. rustc) without waiting for/doing a pile of deeper LLVM work.)
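As a side note, the mem::transmute() reinterpretation guarantee requested earlier can be illustrated on stable Rust with plain arrays standing in for the vector types; this only demonstrates the bit-level round-trip property (lane values after the cast are endian-dependent, so the sketch deliberately avoids asserting them):

```rust
use std::mem::transmute;

fn main() {
    // Plain arrays model `u64x2` and `u32x4`: both are 16 bytes, so the
    // cast is a relabeling of the same bits, not a conversion.
    let v: [u64; 2] = [0x1111_2222_3333_4444, 0x5555_6666_7777_8888];

    // Reinterpret without changing the contents. The lane order in `w`
    // is endian-dependent, so no particular lane values are asserted.
    let w: [u32; 4] = unsafe { transmute(v) };

    // The round trip is lossless: transmuting back recovers the input.
    let v2: [u64; 2] = unsafe { transmute(w) };
    assert_eq!(v, v2);
    println!("ok");
}
```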
Since the point of this RFC is to allow the creation of SIMD helper crates, and not to be used directly by the end developer, wouldn't it be more flexible to not implement the primitive operations, and instead let the crate developer implement the Add, etc. traits by calling the corresponding intrinsic?

Yes, this was discussed a little above, and the RFC has now been changed to use that.

I don't see the point of flattening the types, instead of allowing only structs with the same element type repeated a few times. That part needs additional justification for why it won't be just adding unnecessary complexity to the implementation.

This is designed to allow

```rust
struct bool32(u32);
#[repr(simd)]
struct bool32x4(bool32, bool32, bool32, bool32);
```

However, this is becoming less important to me at the moment.

some intrinsics seem to not be available through the link_llvm_intrinsics feature; for instance, I couldn't find the u64x2 shuffle intrinsics

Yes, LLVM doesn't offer intrinsics as functions for things that exist in its language.

A single-vector shuffle is also useful, and simd_shuffle_single_4(v, 1, 2, 3, 0) is easier to understand than x86_mm_shuffle_epi32(v, 0b00_11_10_01). [...] Of course, another option would be to do a simd_shuffle4(v, v, 1, 2, 3, 0), and hope that the compiler can notice that only the first vector is being used

That is the approach that LLVM explicitly uses. I don't know about other Rust back-ends (since they don't exist yet), but it seems trivial to handle lowering simd_shuffle4(v, v, ...) to whatever single-argument shuffle one likes.

And sometimes a single intrinsic isn't available; I couldn't find so far a single intrinsic to swap the two halves of an u64x2

I don't understand what you mean. The simd_shuffle2 is explicitly designed for this.

An alternative design for the generic shuffles would be: simd_shuffle(v, w, s) where s is some kind of Shuffle object, similar to Range (but restricted to be a compile-time constant).
That might allow separating the shuffle specification from the call; for instance, one could have a (compile-time constant) fn swap_consecutive64() -> Shuffle which returned a Shuffle which swaps consecutive objects (1, 0, 3, 2, ...).

Yeah, I've considered this. However, I think this is better handled by the higher level APIs, not the raw intrinsics, e.g.

```rust
impl u32x4 {
    fn swizzle<T: Shuffle4>(self) -> u32x4 {
        unsafe { simd_shuffle4(self, self, T::N0, T::N1, T::N2, T::N3) }
    }
}

trait Shuffle4 {
    const N0: u32;
    const N1: u32;
    const N2: u32;
    const N3: u32;
}

struct SwapConsec;
impl Shuffle4 for SwapConsec {
    const N0: u32 = 1;
    const N1: u32 = 0;
    const N2: u32 = 3;
    const N3: u32 = 2;
}

// used like
let w = v.swizzle::<SwapConsec>();
```

If the vectors are restricted to just "structs with a repeated element type", insert and extract of a constant index don't need an intrinsic; they're just v.0 = ... and ... = v.0 (or v.x = ... and ... = v.x if they're named). Of course, that doesn't work if the index is not a constant, unless the vector was declared as an array (#[repr(simd)] struct u32x4([u32; 4]) as mentioned in the alternatives). This definitely has upsides, but there are a few downsides, like the indexing one, and it also sort-of starts to think of/use SIMD vectors as memory objects rather than registers, which is how they are usually treated. However, the major one is API uniformity, with boolean vectors. Some platforms/instruction sets represent vector boolean results as "wide" booleans, e.g. a comparison of two f32x4's results in (basically) a u32x4 where each element is either all 0s (false) or all 1s (true). But, some platforms represent them as single bits, e.g. the result of that comparison would be a u8 (or u32 or something) where the bottom four bits are either 0 or 1 as appropriate. The field approach doesn't work for such booleans. ...
Then again, these concerns are at a higher level than this RFC: I could imagine the extract/replace API being implemented internally by directly accessing private fields, and extract/replace probably don't make much sense with the compact version anyway. Not having a way to tell cargo to pass -C target-feature=+neon to the compiler is a pain. Without it, LLVM explodes with a LLVM ERROR: Do not know how to split the result of this operator! message. By the way, the + is required, and it won't warn you if it's missing. This is https://github.com/rust-lang/cargo/issues/1137. NB. cargo rustc allows adding features in some situations. Having a way to convert directly between vectors with different element sizes could be useful As you say, this is transmute. Doing it safely isn't a concern of this RFC. (For one, my understanding is that LLVM is still just migrating to an architecture to support per-function target specialisations, so any design proposed for allowing __attribute__((target)) wouldn't be able to implemented in any existing Rust compiler (i.e. rustc) without waiting for/doing a pile of deeper LLVM work.) If LLVM can't do that even for C yet, I agree that it's better to wait. And now that I got a bit more of experience playing with SIMD, I can see that it's not the whole answer. It's useful if you write a fully specialized core of your algorithm for each variant, so you have for instance a SSE2 core, a SSSE3 core, and an AVX2 core; but if you write it in a generic way, like in this BLAKE2 core, it's hard to have a specialized variant for each SIMD implementation, since the specialization for each SIMD variant is in a separate module. And that's probably how a generic SIMD crate would be used; that SIMD module I wrote could be thought of as a "mini-SIMD crate with software fallback". Yes, LLVM doesn't offer intrinsics as functions for things that exist in its language. And unfortunately, Rust doesn't expose shufflevector yet. This RFC would fix that. 
That is the approach that LLVM explicitly uses. From what I've seen, it does something slightly different: the equivalent of simd_shuffle4(v, uninitialized(), ...). I think simd_shuffle4(v, v, ...) is cleaner, and I hope it works the same. I don't understand what you mean. The simd_shuffle2 is explicitly designed for this. And it doesn't exist yet :cry: Fortunately, I've found that extracting and reconstructing works quite well; the compiler was able to determine that I meant to do a shuffle every time I tried (of course, not all attempts at a shuffle are successful; it depends on the instructions the compiler knows how to use). So something like u64x2(foo.1, foo.0) turns into a single shuffle instruction (like in these shuffles, or even in the ones where I abuse mem::transmute - they do work, and generate faster code than the generic version). Yeah, I've considered this. However, I think this is better handled by the higher level APIs, not the raw intrinsics, e.g. Great example! With something like that, it's not that much of a problem to have a 66-argument shuffle intrinsic: it can be hidden behind a saner API. And since most of the point of this RFC, as far as I can now see, is to directly expose more of LLVM's internal SIMD API, if said API can have a 66-argument shufflevector, why not expose it? As you say, this is transmute. Doing it safely isn't a concern of this RFC. The safety needed is just a guarantee that mem::transmute on a SIMD value does what one would expect: reinterpret the contents of the register (or memory) as having a different number of lanes and/or different types, without actually changing its contents. Yes, the result will depend on the lane and type layout (is lane 0 first or last? are the values two's-complement little-endian or big-endian?), but if you are using transmute, you are already deep in the bit-level representation of your SIMD types. And unfortunately, Rust doesn't expose shufflevector yet. This RFC would fix that. 
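To make the index rules discussed above concrete: `simd_shuffle4(v, w, idx)` conceptually concatenates the two input vectors and picks lanes by index, so indices 0–3 select from `v` and 4–7 from `w`. A plain-array model of that semantics (no actual SIMD involved; helper names are mine, not the RFC's):

```rust
/// Plain-array model of `simd_shuffle4(v, w, idx)`: lanes are picked from
/// the 8-lane concatenation of `v` and `w`.
fn shuffle4_model(v: [u64; 4], w: [u64; 4], idx: [usize; 4]) -> [u64; 4] {
    let concat = [v[0], v[1], v[2], v[3], w[0], w[1], w[2], w[3]];
    [concat[idx[0]], concat[idx[1]], concat[idx[2]], concat[idx[3]]]
}

/// The single-vector "swap the two halves of a u64x2" case is just
/// `simd_shuffle2(v, v, [1, 0])` in this model.
fn swap_halves_model(v: [u64; 2]) -> [u64; 2] {
    let concat = [v[0], v[1], v[0], v[1]];
    [concat[1], concat[0]]
}
```

Passing the same vector twice, as in `swap_halves_model`, is exactly the "simd_shuffle2(v, v, ...)" pattern from the discussion; the backend can trivially lower it to a single-input shuffle.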
And it doesn't exist yet :cry:

Ah, so I was a little confused by your commentary, some of it is general comments about the current state of the world (which is quickly disappearing) and some of it is feedback on this RFC in particular.

From what I've seen, it does something slightly different: the equivalent of simd_shuffle4(v, uninitialized(), ...). I think simd_shuffle4(v, v, ...) is cleaner, and I hope it works the same.

Yes, LLVM will presumably normalise to uninitialised internally. I was just talking about shuffles taking two arguments.

The safety needed is just a guarantee that mem::transmute on a SIMD value does what one would expect: reinterpret the contents of the register (or memory) as having a different number of lanes and/or different types, without actually changing its contents.

I was referring to safety in the Rust sense, i.e. exposing a function that doesn't require unsafe to call, since the fundamental operation isn't unsafe.

We get to control the exact definition of "structural equality" for these purposes, and I think ensuring that this does (well, doesn't) work is a good idea.

We are on the same side then: There has to be a way to make sure that my type is not considered structurally equal to any outside type. Restricting the check to visible (or just public) fields should provide that, and I think it'd be the cleanest approach. (You could then remove again the restriction that, e.g., add cannot mix different types.) But in the end, I don't care too much how this is enforced.

I ported my blake2-rfc crate to the current version of the code available at https://github.com/rust-lang/rust/pull/27169, and here are my comments (you can see what I changed at https://github.com/cesarb/blake2-rfc/commit/9e5a416d79a9e75eb474acb7f6d313c490a23036):

The basic intrinsics like simd_add should be gated by the simd_basics feature, instead of the platform_intrinsics feature. The latter should be reserved for platform-specific intrinsics.
The basic intrinsics like simd_add should be safe to use, why do I need an unsafe block to call them? (The above also applies to shuffles, insert/extract, and compares) When you use repr(C), it implies allow(non_camel_case_types). Perhaps the same could be done for repr(simd)? Why do the shuffle intrinsics have two type parameters? src/simd.rs:220:5: 222:66 error: intrinsic has wrong number of type parameters: found 1, expected 2 [E0094] src/simd.rs:220 fn simd_shuffle8<T>(v: T, w: T, src/simd.rs:221 i0: u32, i1: u32, i2: u32, i3: u32, src/simd.rs:222 i4: u32, i5: u32, i6: u32, i7: u32) -> T; I have to use fn simd_shuffle8<T, Elem>(v: T, w: T, ...) -> T, which means the compiler can't infer all the type parameters, so the call is an ugly simd_shuffle8::<u16x8, u16>(tmp, tmp, ...), instead of the simd_shuffle8(tmp, tmp, ...) I could have if it had only one type parameter. If I add -C target-feature=+ssse3 to the compiler command line, the SIMD code becomes slower. That is, allowing the compiler to use more instructions makes it generate slower code. Why??? I haven't played with cfg_target_feature yet. I wanted to use it to know when I can use SSSE3's pshufb, but since merely enabling SSSE3 made the code slower, I don't know if it'll be worthwhile. Finally, an unrelated thought about the RFC: scatter/gather operations allow (partially) operating on a SIMD vector of pointers. This would require allowing pointers(/references?) in repr(simd) types. Not necessarily. At least the Intel gather instructions (vpgather*) operate on a scaled vector of offsets. For instance, the variant to gather a u32x4 using 32-bit offsets from a base address is represented in C as __m128i _mm_i32gather_epi32 (int const* base_addr, __m128i vindex, const int scale). In Rust, I'd expect something like fn gather32_u32(base: &[u32], index: u32x4) -> u32x4 { ... } (I did an experiment calling llvm.x86.avx2.gather.d.d and llvm.x86.avx2.gather.d.q.256, but the result was much slower. 
I don't know if it's LLVM's fault, or if that instruction isn't as fast as I'd hoped.)

If I add -C target-feature=+ssse3 to the compiler command line, the SIMD code becomes slower. That is, allowing the compiler to use more instructions makes it generate slower code. Why???

Without looking at the generated code, I have two suspicions:

- Depending on your processor, those instructions could be microcoded, and using the direct SSE2 equivalent avoids going through microcode
- You aren't allowing enough extra instructions, so it needs to generate stupid code to emulate something that supports using some SSSE3 feature. Try enabling all the features.

Why do the shuffle intrinsics have two type parameters?

Because the inputs might not match the output type; if you use simd_shuffle8 you can give it two Simd4s, and get a Simd8 out.

@cesarb Thanks for taking the time to experiment with it, I'll be sure to take a look at your code tomorrow. I'll just reemphasise that this API is designed to be the absolute lowest level, for people to build better functionality APIs above, not to be used directly regularly.

The basic intrinsics like simd_add should be gated by the simd_basics feature, instead of the platform_intrinsics feature. The latter should be reserved for platform-specific intrinsics.

The intention is for the feature gate to disappear, so the details don't seem to matter that much to me. Maybe you see a deeper reason they should be separate?

Why do the shuffle intrinsics have two type parameters?

The current design was chosen with generic SIMD types in mind, to allow the signature given in the RFC, so that one can shuffle more than just within a single type, e.g. on an x86 CPU with AVX, one could join two xmm registers (say f32x4) into a single ymm one (f32x8), or, vice versa: truncating a ymm register to an xmm. Shuffles are the general purpose way to handle this sort of munging.

The basic intrinsics like simd_add should be safe to use, why do I need an unsafe block to call them?
(The above also applies to shuffles, insert/extract, and compares) All extern functions are unsafe to call. Maybe it makes sense to make them sometimes safe to call if they happen to be so, but it's a more general "problem", and, these aren't meant to be called directly. When you use repr(C), it implies allow(non_camel_case_types). Perhaps the same could be done for repr(simd)? That's sounds reasonable. If I add -C target-feature=+ssse3 to the compiler command line, the SIMD code becomes slower. That is, allowing the compiler to use more instructions makes it generate slower code. Why??? This is very likely to be caused by LLVM, not rustc or the functionality I added. It may be worth investigating the IR/asm and filing an LLVM bug if you can narrow it down. -C target-cpu=native might be a useful tool in comparing the native code LLVM wants to generate for your CPU with what it is assuming runs fast with just +ssse3. (I did an experiment calling llvm.x86.avx2.gather.d.d and llvm.x86.avx2.gather.d.q.256, but the result was much slower. I don't know if it's LLVM's fault, or if that instruction isn't as fast as I'd hoped.) Combination of both I imagine. After posting my previous comment, I received an email with a failure from travis-ci which points to another problem: src/simd.rs:33:8: 33:28 error: illegal ABI: expected one of [cdecl, stdcall, fastcall, aapcs, win64, Rust, C, system, rust-intrinsic, rust-call], found `platform-intrinsic` src/simd.rs:33 extern "platform-intrinsic" { ^~~~~~~~~~~~~~~~~~~~ src/simd.rs:41:8: 41:28 error: illegal ABI: expected one of [cdecl, stdcall, fastcall, aapcs, win64, Rust, C, system, rust-intrinsic, rust-call], found `platform-intrinsic` src/simd.rs:41 extern "platform-intrinsic" { ^~~~~~~~~~~~~~~~~~~~ It seems that #[cfg(feature = "...")] was not enough to hide the extern block from the compiler, it insists in validating the ABI name even though it's supposed to be ignoring the whole block. 
And that's the "stable" compiler, it was supposed to work because I define no cargo features (the "nightly" features are all gated by cargo features in that crate), so the extern blocks were supposed to be ignored.

@cmr

You aren't allowing enough extra instructions so it needs to generate stupid code to emulate something that supports using some SSSE3 feature. Try enabling all the features.

@huonw

-C target-cpu=native might be a useful tool in comparing the native code LLVM wants to generate for your CPU with what it is assuming runs fast with just +ssse3

The problem is, target-cpu=native enables AVX2, which somehow makes it go even slower.

A few benchmarks with all the cargo features enabled follow. Enabling just the "simd" cargo feature is slower than the non-SIMD code, probably because this CPU can run 4 integer instructions in parallel, and SIMD rotates need 3 instructions, while each non-SIMD rotate needs 1 instruction (and the 4 rotates can run in parallel). The gains when I enable the "simd_opt" cargo feature all come from replacing the SIMD rotates by single-instruction (or two-instruction for u64x4) shuffles.

default (the fastest I've got so far):

```text
test blake2b::bench::bench_16  ... bench:     198 ns/iter (+/- 2) = 80 MB/s
test blake2b::bench::bench_4k  ... bench:   5,766 ns/iter (+/- 44) = 710 MB/s
test blake2b::bench::bench_64k ... bench:  92,115 ns/iter (+/- 1,288) = 711 MB/s
test blake2s::bench::bench_16  ... bench:     141 ns/iter (+/- 1) = 113 MB/s
test blake2s::bench::bench_4k  ... bench:   7,997 ns/iter (+/- 86) = 512 MB/s
test blake2s::bench::bench_64k ... bench: 127,775 ns/iter (+/- 675) = 512 MB/s
```

with -C target-feature=+ssse3:

```text
test blake2b::bench::bench_16  ... bench:     208 ns/iter (+/- 37) = 76 MB/s
test blake2b::bench::bench_4k  ... bench:   5,939 ns/iter (+/- 39) = 689 MB/s
test blake2b::bench::bench_64k ... bench:  95,356 ns/iter (+/- 833) = 687 MB/s
test blake2s::bench::bench_16  ... bench:     141 ns/iter (+/- 1) = 113 MB/s
test blake2s::bench::bench_4k  ... bench:   7,995 ns/iter (+/- 1,659) = 512 MB/s
test blake2s::bench::bench_64k ... bench: 127,766 ns/iter (+/- 758) = 512 MB/s
```

with -C target-cpu=native (which uses AVX2 instructions everywhere):

```text
test blake2b::bench::bench_16  ... bench:     230 ns/iter (+/- 7) = 69 MB/s
test blake2b::bench::bench_4k  ... bench:   6,592 ns/iter (+/- 70) = 621 MB/s
test blake2b::bench::bench_64k ... bench: 105,376 ns/iter (+/- 345) = 621 MB/s
test blake2s::bench::bench_16  ... bench:     143 ns/iter (+/- 1) = 111 MB/s
test blake2s::bench::bench_4k  ... bench:   8,181 ns/iter (+/- 57) = 500 MB/s
test blake2s::bench::bench_64k ... bench: 130,863 ns/iter (+/- 4,278) = 500 MB/s
```

This is very likely to be caused by LLVM, not rustc or the functionality I added.

I agree.

I'll just reemphasise that this API is designed to be the absolute lowest level, for people to build better functionality APIs above, not to be used directly regularly.

That's what I'm experimenting with: the lib/simd.rs file builds a better functionality API above it, while the lib/blake2.rs file uses it (and required no change when porting to this proposed RFC).

The current design was chosen with generic SIMD types in mind, to allow the signature given in the RFC, so that one can shuffle more than just within a single type, e.g. on an x86 CPU with AVX, one could join two xmm registers (say f32x4) into a single ymm one (f32x8), or, vice versa: truncating a ymm register to an xmm. Shuffles are the general purpose way to handle this sort of munging.

I see.
I have one place which might be able to use it:

```rust
fn u64x4_rotate_right_u8(vec: u64x4, n: u8) -> u64x4 {
    let tmp0 = vext_u64_u8(u64x2(vec.0, vec.1), n);
    let tmp1 = vext_u64_u8(u64x2(vec.2, vec.3), n);
    u64x4(tmp0.0, tmp0.1, tmp1.0, tmp1.1)
}
```

Could be replaced by something like:

```rust
fn u64x4_rotate_right_u8(vec: u64x4, n: u8) -> u64x4 {
    let tmp0 = vext_u64_u8(simd_shuffle2(vec, vec, 0, 1), n);
    let tmp1 = vext_u64_u8(simd_shuffle2(vec, vec, 2, 3), n);
    simd_shuffle4(tmp0, tmp1, 0, 1, 2, 3)
}
```

But first I'd have to change all the vector types to be generic in the element type.

But first I'd have to change all the vector types to be generic in the element type.

Which is harder than it looks. This doesn't work:

```rust
#[derive(Clone, Copy, Debug)]
#[repr(simd)]
pub struct Simd4<T>(pub T, pub T, pub T, pub T);

pub type u32x4 = Simd4<u32>;
```

Since it doesn't find the constructor e.g. u32x4(0, 0, 0, 0). Either I add a fn new(...) -> Self to the type's impl (touching every place that constructs a SIMD value), or I leak the fact that it's a generic type outside the SIMD code (touching every place that uses a SIMD value). I don't know which would be the better design. The traditional Rust SIMD design of u32x4, or the generic Simd4<u32> variant. This SIMD crate prototype uses the former, while this RFC tends towards the latter.

@cesarb

But first I'd have to change all the vector types to be generic in the element type.

What was the exact issue you had? I haven't looked at the prototype, but it's possible that it doesn't yet have the relevant functionality to support that case. Please remember that this comment thread is for the RFC, not the prototype implementation, so pointing out issues with the prototype is only relevant if the behaviour is consistent with the RFC.
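For what it's worth, the `fn new` route can be sketched without `#[repr(simd)]` (modeled here with a plain struct, since the SIMD attribute was the sticking point); a type alias forwards inherent methods, so call sites can stay close to the old spelling:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
pub struct Simd4<T>(pub T, pub T, pub T, pub T);

impl<T> Simd4<T> {
    // Replaces tuple-struct construction, which isn't available through
    // a type alias spelled like `u32x4(0, 0, 0, 0)`.
    pub fn new(a: T, b: T, c: T, d: T) -> Self {
        Simd4(a, b, c, d)
    }
}

#[allow(non_camel_case_types)]
pub type u32x4 = Simd4<u32>;
```

Call sites become `u32x4::new(0, 0, 0, 0)`: every constructor site changes, but code that only consumes the values is untouched.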
What's the basic plan with respect to stabilization of these features? As long as the idea is just to get these into the unstable compiler, so that the implementation PR can land and we can start experimenting more seriously with higher-level APIs, further evolving the lower-level ones, and whatever else, that's completely cool. But the number of places (there's a few) where we're currently forced to say "enforcing this in the type system is difficult-to-impossible right now, so let's punt it to the backend" still bothers me if it's something we'd be committing to support as a stable feature forever (or until 2.0, anyways). @glaebhoerl I think there is no plan to stabilize until experience has been gained, but of course the further we go down this road, the less likely we'll back up and start from another. Basically, I suspect the approach I'd prefer is that once we have this infrastructure, and we've used it to gain some experience and to figure out the best way to formulate higher-level "type-safe" (i.e. without checks deferred to codegen) abstractions for SIMD (various traits, etc.), we should go ahead and stabilize those. To be clear, I agree with you Gabor -- I'd like to gain experience in what we want to do, first, and then come back and see if we can find the most elegant way to do it. (Not that this approach is unacceptable, it's quite elegant in its own way.) I'm just acknowledging the power of incumbency. :) On Thu, Sep 10, 2015 at 5:45 PM, Gábor Lehel notifications@github.com wrote: Basically, I suspect the approach I'd prefer is that once we have this infrastructure, and we've used it to gain some experience and to figure out the best way to formulate higher-level "type-safe" (i.e. without checks deferred to codegen) abstractions for SIMD (various traits, etc.), we should go ahead and stabilize those. — Reply to this email directly or view it on GitHub https://github.com/rust-lang/rfcs/pull/1199#issuecomment-139387790. It's official. 
The language subteam has decided to accept this RFC. Tracking issue is https://github.com/rust-lang/rust/issues/27731
gharchive/pull-request
2015-07-10T18:01:12
2025-04-01T06:45:41.330577
{ "authors": [ "Aatch", "RalfJung", "alexcrichton", "arielb1", "cesarb", "cmr", "eternaleye", "glaebhoerl", "huonw", "main--", "nikomatsakis", "pcwalton" ], "repo": "rust-lang/rfcs", "url": "https://github.com/rust-lang/rfcs/pull/1199", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
175765658
Add uninstall instructions for rustup Refs #513 Thanks for the pull request, and welcome! The Rust team is excited to review your changes, and you should hear from @alexcrichton (or someone else) soon. If any changes to this PR are deemed necessary, please add them as extra commits. This ensures that the reviewer can see what has changed since they last reviewed the code. Due to the way GitHub handles out-of-date commits, this should also make it reasonably obvious what issues have or haven't been addressed. Large or tricky changes may require several passes of review and changes. Please see the contribution instructions for more information. r? @brson Thanks @navaati! 🎉
gharchive/pull-request
2016-09-08T14:16:31
2025-04-01T06:45:41.409549
{ "authors": [ "alexcrichton", "brendanzab", "brson", "navaati", "rust-highfive" ], "repo": "rust-lang/rust-www", "url": "https://github.com/rust-lang/rust-www/pull/514", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
987727214
Add KEEPALIVE_TIME for haiku and openbsd (compilation was broken) Compilation on openbsd was broken since KEEPALIVE_TIME was not defined anywhere. SO_KEEPALIVE is not the same as KEEPALIVE_TIME, also see https://github.com/rust-lang/socket2/pull/251. Oh. I read the manpage and it doesn't mention something like that. Would returning an error be acceptable? Otherwise it won't build at all. And on sysctl(3) it mentions KEEPALIVE set on TCP sockets. Oh. I read the manpage and it doesn't mention something like that. KEEPALIVE_TIME is TCP_KEEPALIVE, thus on the TCP socket level, while SO_KEEPALIVE is on the more general socket level (SO). Would returning an error be acceptable? Otherwise it won't build at all. I'm afraid not, but we can remove the function from OpenBSD by using something like: #[cfg(not(target_os = "openbsd"))] on keepalive_time and set_keepalive_time. And on sysctl(3) it mentions KEEPALIVE set on TCP sockets. That is a boolean to enable keepalive, while KEEPALIVE_TIME sets a duration, see TCP_KEEPIDLE for Linux or TCP_KEEPALIVE for macOS.
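The cfg-gating approach described above looks roughly like this. The constant values shown are, to my knowledge, the real Linux/macOS ones, but treat the sketch as illustrative; the actual crate reads them from libc:

```rust
// Per-target "keepalive idle time" socket option, at the TCP level
// (distinct from the SOL_SOCKET-level SO_KEEPALIVE on/off switch).
#[cfg(any(target_os = "linux", target_os = "android"))]
const KEEPALIVE_TIME: i32 = 4; // would be libc::TCP_KEEPIDLE
#[cfg(any(target_os = "macos", target_os = "ios"))]
const KEEPALIVE_TIME: i32 = 0x10; // would be libc::TCP_KEEPALIVE

// OpenBSD has no equivalent option, so the accessors simply don't
// exist there, rather than failing at runtime.
#[cfg(not(target_os = "openbsd"))]
fn keepalive_time_opt() -> i32 {
    KEEPALIVE_TIME
}
```

Removing the function at compile time keeps the "unsupported platform" failure visible to users of the crate, instead of surfacing as a runtime error.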
gharchive/pull-request
2021-09-03T12:34:24
2025-04-01T06:45:42.152639
{ "authors": [ "Thomasdezeeuw", "epilys" ], "repo": "rust-lang/socket2", "url": "https://github.com/rust-lang/socket2/pull/263", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
719043343
Add a blanket impl<T: ToPrimitive> ToPrimitive for &T

Not sure if this is a breaking change or not, but I think this makes sense.

Yes, it is a breaking change, because &T is a fundamental type. Can you explain why you want to be generic in this way? Maybe it can be done another way, like T: Borrow<U>, U: ToPrimitive to accept either owned or borrowed values.

This is specifically for NumCast::from(), which expects an owned T where T: ToPrimitive. Would it be better to change that to accept an AsRef/Borrow<T>?

Hmm, I doubt we could change NumCast::from without it being a breaking change. I guess we could add a new method like from_ref, but that would need T: Clone so it can be defaulted to use from, while all of our impls can call specific to_foo conversions without cloning.

Personally, I avoid NumCast because I don't like how it overlaps with From::from. Directly using ToPrimitive is also better for &BigInt since the methods only require &self. I'd also suggest the standard TryFrom if possible, stable as of Rust 1.34, but that's not implemented for floating point.

Ah, yeah, TryFrom/Into might be better; I can't use ToPrimitive directly because I'm using a libc type definition. Very specific use case :upside_down_face:. I'll close this I guess, but maybe it's something to think about if num-traits ever gets a 0.3? Though that's probably unlikely anytime soon.
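To illustrate the `T: Borrow<U>` shape suggested above without touching NumCast itself, here's a self-contained sketch. `ToPrimitive` is modeled locally with a single method (the real num-traits trait has many more), and `cast_u32` is a made-up helper name:

```rust
use std::borrow::Borrow;
use std::convert::TryFrom;

// Local stand-in for the relevant slice of num_traits::ToPrimitive.
trait ToPrimitive {
    fn to_u32(&self) -> Option<u32>;
}

impl ToPrimitive for i64 {
    fn to_u32(&self) -> Option<u32> {
        u32::try_from(*self).ok()
    }
}

// Accepts either an owned value or a reference, without needing a
// blanket `impl ToPrimitive for &T` (which would be a breaking change,
// since `&T` is a fundamental type).
fn cast_u32<U: ToPrimitive, T: Borrow<U>>(value: T) -> Option<u32> {
    value.borrow().to_u32()
}
```

The cost is that inference sometimes needs a turbofish, e.g. `cast_u32::<i64, _>(&x)`, because `Borrow` has many impls for any given `T`.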
gharchive/pull-request
2020-10-12T05:01:46
2025-04-01T06:45:42.163364
{ "authors": [ "coolreader18", "cuviper" ], "repo": "rust-num/num-traits", "url": "https://github.com/rust-num/num-traits/pull/191", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1726929306
git tag "latest" is 3 years old

I know this is not really that important but it might trip someone up in the future. Anyways, while looking through the repo I noticed that the git tag "latest" points to a commit from 2018, which is quite obviously not the latest version.

Thanks for reporting! I removed that tag to avoid confusion.
gharchive/issue
2023-05-26T05:24:26
2025-04-01T06:45:42.164602
{ "authors": [ "Wasabi375", "phil-opp" ], "repo": "rust-osdev/bootloader", "url": "https://github.com/rust-osdev/bootloader/issues/373", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
700181065
Decouple instructions into a separate feature flag.

This would implement what was proposed in https://github.com/rust-osdev/x86_64/issues/177. There is one issue, however - the array_init dependency. The crate currently does not compile with no features on. From what I could tell, the dep is only used in PageTable::new, and it very well could be dropped, by replacing PageTable::new with an identical implementation as the const variant, just kept as non-const. However, I did not implement this change, as I didn't know if there was something else at play that needed the function to be the way it is. If this change would be fine, I can go ahead and add it in this PR. Other than that, if there are any other changes that may need to be implemented, please let me know!

Edit(phil-opp): This is a breaking change.

I'm also not sure why the workflows failed.

Regarding the const-ness of PageTable::new() on stable: I think we can make the function non-const for now and drop the array-init dependency. This is going to be a breaking change anyway because of the new feature gate names, so I think that's ok for now.

The final nail in the coffin is the addr module, which is depended on by the OffsetPageTable. Lots of functions are marked for only 64-bit pointer width, and that is very fair - lots of silent truncation errors can lead to the oddest of behaviours. I am wondering, however, if it wouldn't make sense to allow working with 64-bit addresses on 32-bit machines? At the last as_ptr step it would be possible to inject a truncation debug assertion; given this brings enough breaking changes as it stands, and if I'm not mistaken, adding extra functionality wouldn't be a bad thing.

From what I can tell there are 2 options to go about the issue - either mark OffsetPageTable as a structure that depends on 64-bit pointer width, or make the address types functional on non-64-bit architectures. Of course, there can be a third solution, but I don't see it right now.
Please tell me what you think about it. And how should I proceed?

I am wondering, however, if it wouldn't make sense to allow working with 64-bit addresses on 32-bit machines? At the last as_ptr step it would be possible to inject a truncation debug assertion; given this brings enough breaking changes as it stands, and if I'm not mistaken, adding extra functionality wouldn't be a bad thing.

I don't think that panicking at runtime is a good idea. People can use the as_u64 method and a manual cast if they want this behavior, but it should not be the default.

From what I can tell there are 2 options to go about the issue - either mark OffsetPageTable as a structure that depends on 64-bit pointer width, or make the address types functional on non-64-bit architectures.

The OffsetPageTable type is already only available on x86_64 in the current implementation: https://github.com/rust-osdev/x86_64/blob/3ce339e4eabb9cafae6d0576ea89a10132d5c3e0/src/structures/paging/mapper/offset_page_table.rs#L1

So I think this is the way to go. OffsetPageTable is just a thin wrapper around MappedPageTable, which is available on all architectures. Users on non-x86_64 architectures can use MappedPageTable directly with a custom PhysToVirt implementation.
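The pointer-width gating being discussed can be sketched like this — a simplified stand-in for the crate's real VirtAddr type, which additionally enforces canonical-address invariants:

```rust
/// Simplified model of a 64-bit virtual address type.
struct VirtAddr(u64);

impl VirtAddr {
    /// Available on every target: no truncation can occur here.
    fn as_u64(&self) -> u64 {
        self.0
    }

    /// Only compiled when pointers really are 64 bits wide, so a
    /// 32-bit target can never silently truncate an address.
    #[cfg(target_pointer_width = "64")]
    fn as_ptr<T>(&self) -> *const T {
        self.0 as *const T
    }
}
```

On a 32-bit target, code that calls `as_ptr` fails to compile rather than misbehaving at runtime, which matches the "no runtime panic" preference above.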
I'll probably wait until the macOS nightly is fixed until merging, to ensure that everything works there as well. I hope that this will happen in the next few days. Published as v0.12.0.
gharchive/pull-request
2020-09-12T10:21:52
2025-04-01T06:45:42.174988
{ "authors": [ "h33p", "phil-opp" ], "repo": "rust-osdev/x86_64", "url": "https://github.com/rust-osdev/x86_64/pull/179", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1762698310
WeightedIndex error: invalid weight

When trying to run the Pythia model using gptneox, I got this error; btw I use Termux on Android with Rust installed to run this model.

```text
$ cargo run --release -- gptneox infer -m pythia-160m-q4_0.bin -p "Tell me how cool the Rust programming language is:"
    Finished release [optimized] target(s) in 2.18s
     Running target/release/llm gptneox infer -m pythia-160m-q4_0.bin -p 'Tell me how cool the Rust programming language is:'
✓ Loaded 148 tensors (92.2 MB) after 293ms
<|padding|>Tell me how cool the Rust programming language is:The application panicked (crashed).
Message:  WeightedIndex error: InvalidWeight
Location: crates/llm-base/src/samplers.rs:157

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ BACKTRACE ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Run with COLORBT_SHOW_HIDDEN=1 environment variable to disable frame filtering.
Run with RUST_BACKTRACE=full to include source snippets.
```

Interesting, can you link the exact model you used, and the model of phone you have? I suspect this is more likely an issue with the execution on the phone (which will be a more complicated issue to diagnose), but we should rule out any issues with the model on a PC first.

Interesting, can you link the exact model you used, and the model of phone you have? I suspect this is more likely an issue with the execution on the phone (which will be a more complicated issue to diagnose), but we should rule out any issues with the model on a PC first.

Here is the link to the model: https://huggingface.co/rustformers/pythia-ggml/blob/main/pythia-160m-q4_0.bin

FYI this model runs ok on my PC. But I find bloom and llama run smoothly on my phone. My device is: Poco M3 Pro 5G with 4 GB RAM.

Ok, I've done some more testing - this model "works" (produces a lot of garbage) on x86-64 Windows, but doesn't work on macOS ARM64. I think this is an ARM64 issue, or at least it's more obviously broken on ARM64.
We'll need to test with upstream GGML GPT-NeoX support to see if this is an issue with GGML or with our implementation. Yeah, I think so too. Maybe only some models can run on ARM64 architecture. llama.cpp (officially supported on Android according to the documentation), alpaca, or vicuna should work fine on Android. When I saw the availability of GPT-J in Rustformers, I became interested in performing inference with Rust on Android. Previously, llama.cpp only supported large models. I haven't tested the GPT-J family models yet because at that time they could only be run using the Transformers Python library, which requires Torch. Please note that Torch cannot be installed on Termux, and the same applies to NumPy. I got this error a few times while implementing Metal support (#311) and it happened there when a graph was not fully computed or otherwise misconfigures (leading to garbage output). This was also on arm64 (M1). So either something up with graph construction or some ARM64 specific race condition? I got this error a few times while implementing Metal support (#311) and it happened there when a graph was not fully computed or otherwise misconfigures (leading to garbage output). This was also on arm64 (M1). So either something up with graph construction or some ARM64 specific race condition? Edit: could also just be the context running out of memory, or will that always lead to an error? btw, what model do you use on arm64? What rustformers models are supported on arm besides llama and bloom?
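The InvalidWeight panic above comes from building a weighted sampling distribution over the model's output probabilities: if any weight is negative or NaN (e.g. garbage logits from a miscomputed graph), construction is rejected. A minimal stdlib-only Rust sketch of that validity rule (it mirrors the rand crate's WeightedIndex checks in spirit; the names here are hypothetical, not rustformers code):

```rust
// Sketch of the weight validation a WeightedIndex-style sampler performs.
// Any NaN or negative weight, or an all-zero total, is rejected - this is
// the kind of check that surfaces as `InvalidWeight` when a model produces
// garbage logits. Hypothetical helper, not the actual rand/rustformers code.
#[derive(Debug, PartialEq)]
enum WeightError {
    InvalidWeight,  // negative or NaN entry
    AllWeightsZero, // no positive mass to sample from
}

fn validate_weights(weights: &[f32]) -> Result<(), WeightError> {
    let mut total = 0.0f32;
    for &w in weights {
        if w.is_nan() || w < 0.0 {
            return Err(WeightError::InvalidWeight);
        }
        total += w;
    }
    if total <= 0.0 {
        return Err(WeightError::AllWeightsZero);
    }
    Ok(())
}

fn main() {
    assert_eq!(validate_weights(&[0.2, 0.5, 0.3]), Ok(()));
    assert_eq!(validate_weights(&[0.2, f32::NAN]), Err(WeightError::InvalidWeight));
    assert_eq!(validate_weights(&[0.0, 0.0]), Err(WeightError::AllWeightsZero));
    println!("ok");
}
```

If a broken graph on ARM64 emits NaNs into the logits, the first failing branch here would correspond to the InvalidWeight case in the backtrace.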
gharchive/issue
2023-06-19T04:15:07
2025-04-01T06:45:42.252824
{ "authors": [ "andri-jpg", "philpax", "pixelspark" ], "repo": "rustformers/llm", "url": "https://github.com/rustformers/llm/issues/320", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2450170633
fix: wasm compilation errors
Thanks for the awesome library! This PR aims to resolve compilation errors when targeting wasm32, which were caused by breaking changes to the RootCertStore interface that needed to be migrated in the conditionally compiled code. Additionally, I've added a couple of Clippy build steps to the CI workflow to try to prevent the code targeting WASM from becoming stale over time as rustls continues to evolve.
Looks like this will need enabling the js feature on getrandom to work? Maybe something like:

[target.'cfg(target_arch = "wasm32")'.dev-dependencies]
getrandom = { version = "*", features = ["js"] }

@ctz Doh! I probably should've run cargo clippy-ci locally! 😅 🤦 In the end, I just had to specify the wasm32_unknown_unknown_js feature for the ring dev-dependency when targeting wasm32_*.
I took another look at this PR this evening. These are the minimum changes I made to get cargo clippy-ci --target wasm32-unknown-unknown passing on my system using the latest main (not counting lock file updates):

diff --git a/rustls-platform-verifier/Cargo.toml b/rustls-platform-verifier/Cargo.toml
index eb7c128..304108b 100644
--- a/rustls-platform-verifier/Cargo.toml
+++ b/rustls-platform-verifier/Cargo.toml
@@ -46,8 +46,13 @@ webpki = { package = "rustls-webpki", version = "0.102", default-features = fals
 android_logger = { version = "0.13", optional = true } # Only used during testing.

 [target.'cfg(target_arch = "wasm32")'.dependencies]
+webpki = { package = "rustls-webpki", version = "0.102", default-features = false }
+rustls-pki-types = { version = "1", features = ["web"] }
 webpki-roots = "0.26"

+[target.'cfg(target_arch = "wasm32")'.dev-dependencies]
+ring = { version = "0.17.7", features = ["wasm32_unknown_unknown_js"] }
+
 # BSD targets require webpki-roots for the real-world verification tests.
 [target.'cfg(target_os = "freebsd")'.dev-dependencies]
 webpki-roots = "0.26"
diff --git a/rustls-platform-verifier/src/verification/others.rs b/rustls-platform-verifier/src/verification/others.rs
index 29dc19d..9cd4e8a 100644
--- a/rustls-platform-verifier/src/verification/others.rs
+++ b/rustls-platform-verifier/src/verification/others.rs
@@ -154,14 +154,8 @@ impl Verifier {

         #[cfg(target_arch = "wasm32")]
         {
-            root_store.add_trust_anchors(webpki_roots::TLS_SERVER_ROOTS.iter().map(|root| {
-                rustls::OwnedTrustAnchor::from_subject_spki_name_constraints(
-                    root.subject,
-                    root.spki,
-                    root.name_constraints,
-                )
-            }));
-        };
+            root_store.extend(webpki_roots::TLS_SERVER_ROOTS.iter().cloned());
+        }

         WebPkiServerVerifier::builder_with_provider(
             root_store.into(),

One open question I have is if this library is the right place to set the web feature on rustls-pki-types, or if that should be a flag in rustls itself. I ask this because usually library code should not enable js or web features in a crate, because doing so assumes wasm32-unknown-unknown is being used in the browser, which may not be the case. @cpu @djc do either of you have thoughts on this? I don't know if rustls has gotten any WASM questions before.
Some discussion linked below, I feel like we had a more specific discussion about this issue but cannot find it now -- maybe on Discord or in the pki-types repo?
https://github.com/rustls/rustls/issues/808
https://github.com/rustls/rustls/pull/1713
https://github.com/rustls/pki-types/pull/32
I think you're thinking of https://github.com/rustls/rustls/pull/1921, which did propose adding a rustls-level web feature that itself activated the pki-types feature. I'm not sure we reached a conclusion there other than to suggest we needed a more holistic approach to WASM support that considered both WASI and browsers, and the associated test coverage.
I don't think any crate in the middle of the dependency tree should set the web feature. Whether WASM is targeted at a browser or non-browser environment is something only known by the top-most crate. (I think it's actually a significant design error that this is not just part of the target triple.)
We agree on that point! wasm32-unknown-unknown was a bad target to choose for Rust-on-the-web broadly.
So since #136 has been merged, is there still anything from this PR that we should consider for merging?
I don't think so; it seems to me that https://github.com/rustls/rustls-platform-verifier/pull/136 picked up everything we want. I think based on the above comments we're happy to test wasm32-wasip1 in CI in place of wasm32-unknown-unknown. Feel free to re-open if I've misjudged. Thanks for the PR adhen93! Apologies it sat for so long.
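The cfg-gating pattern the diff relies on - one code path for wasm32, another elsewhere, with no mid-tree crate enabling js/web features - can be sketched in a few lines. This is an illustration only; the function name and strings are made up, not rustls-platform-verifier API:

```rust
// Illustrative cfg-gated selection of a certificate-root source.
// On wasm32 targets a compiled-in root list (e.g. webpki-roots) is the
// only option; elsewhere the platform trust store can be used. The
// browser-vs-not question stays with the top-level crate's build flags.
#[cfg(target_arch = "wasm32")]
fn root_source() -> &'static str {
    "compiled-in webpki-roots"
}

#[cfg(not(target_arch = "wasm32"))]
fn root_source() -> &'static str {
    "platform trust store"
}

fn main() {
    // On a native (non-wasm) host this prints the platform branch.
    println!("root source: {}", root_source());
}
```

The choice is then expressed purely at build time, via the target triple plus any top-level features, which is the point made in the thread about not setting web in the middle of the dependency tree.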
gharchive/pull-request
2024-08-06T07:36:52
2025-04-01T06:45:42.265525
{ "authors": [ "adenh93", "complexspaces", "cpu", "ctz", "djc" ], "repo": "rustls/rustls-platform-verifier", "url": "https://github.com/rustls/rustls-platform-verifier/pull/122", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2104102736
wasm32-unknown-unknown target not found
🐛 Bug description
I've been building a wasm project using wasm-pack for a few weeks now. All of a sudden I'm hit with this error:

Caused by: wasm32-unknown-unknown target not found in sysroot: "/usr"
Used rustc from the following path: "/bin/rustc"
It looks like Rustup is not being used. For non-Rustup setups, the wasm32-unknown-unknown target needs to be installed manually. See https://rustwasm.github.io/wasm-pack/book/prerequisites/non-rustup-setups.html on how to do this.

I don't understand the error, as I am using Rustup; running rustup target list also shows that the target is installed. I'm only creating this issue because I couldn't find any useful info on this error. I've already tried reinstalling Rust, but the error still persists.
🤔 Expected Behavior
The project should build normally.
👟 Steps to reproduce
The only command I used was wasm-pack build
🌍 Your environment
wasm-pack version: 0.12.1
rustc version: 1.73.0 (cc66ad468 2023-10-03) (Arch Linux rust 1:1.73.0-1)
Removing the snap version of rustup and re-installing it through apt fixed the issue on my Ubuntu 22.04.3.
gharchive/issue
2024-01-28T10:58:17
2025-04-01T06:45:42.269808
{ "authors": [ "TheCuddleDoodle", "grilledwindow" ], "repo": "rustwasm/wasm-pack", "url": "https://github.com/rustwasm/wasm-pack/issues/1364", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2242902986
2.9.5 info box will not appear when tapping on temperature chart
Description: 2.9.5, S22 Ultra
The user taps to hide the infobox on the temperature chart and then tries to get it back; it will not react, even though the chart contains data points in that area. If the user taps on the humidity chart, the infobox will appear and can be hidden. The user had changed the humidity unit to dew point before checking. https://github.com/ruuvi/com.ruuvi.station/assets/50437378/44cdd329-c1a5-44ba-82fe-7a78fbef0f53
Trying to replicate, but the issue has suddenly disappeared. One clue is that the infobox does not open/close during cloud sync.
Not seen anymore, moving to done.
gharchive/issue
2024-04-15T07:19:58
2025-04-01T06:45:42.283966
{ "authors": [ "markoaamunkajo" ], "repo": "ruuvi/com.ruuvi.station", "url": "https://github.com/ruuvi/com.ruuvi.station/issues/1244", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
24169715
Open Error: IO error Invalid argument
Hi, this is my first time using and learning about LevelDB with Node.js, and my simple app can't run. This is my simple code:

var level = require('level'),
    express = require('express'),
    uuid = require('node-uuid'),
    db = level('./users.db', {valueEncoding: 'json'}),
    app = module.exports = express();

app.use(express.logger('dev'));
app.use(express.bodyParser()); // required so req.body is populated in the POST handler below
app.use(app.router);

app.get('/users', function(req, res) {
  db.get('users', function(err, users) {
    if (err) return res.json(404, err);
    return res.json(200, users);
  });
});

app.post('/user', function(req, res) {
  db.put('users', req.body.user, function(err) {
    if (err) return res.json(404, err);
    return res.json(200);
  });
});

app.listen(5000, function() {
  console.log('LevelDB API running on port 5000');
});

And when I run node app, it throws the error:

OpenError: IO error: ./users.db/MANIFEST-000001: Invalid argument

I am using the latest stable: Node.js 0.10.23. Can anybody help me? Thanks!
I had the same problem running bitcoind (which uses leveldb) in a mounted folder with boot2docker. I fixed it by mounting the folder with NFS instead of vboxfs and wrote a script to do the switch automatically: https://gist.github.com/olalonde/3f7512c0bd2bc8abb46d
Great news, everyone (/cc @groundwater): the latest version of Docker (you can register for it at https://beta.docker.com/), which does NOT require VirtualBox (it uses xhyve under the hood, with the native hypervisor that comes with OS X), doesn't suffer from this mmap bug. You can happily create databases on your OS X host operating system and volume-map them to the xhyve docker host, and it works beautifully! Woo! You can read more about the new docker here and get the full docs here. It is a glorious day!
Not sure if anyone is watching this anymore, but I am experiencing similar symptoms running the Docker for Windows beta.
I have an Ubuntu image that does some LevelDB stuff (not sure what exactly, as it's third-party) and I get this error:

events.js:141
      throw er; // Unhandled 'error' event
      ^
OpenError: IO error: /data/farmdata/farmer.db: Invalid argument
    at /usr/lib/node_modules/storjshare-cli/node_modules/storj/node_modules/levelup/lib/levelup.js:119:34
    at /usr/lib/node_modules/storjshare-cli/node_modules/storj/node_modules/leveldown/node_modules/abstract-leveldown/abstract-leveldown.js:39:16

In the Windows host I see the farmdata/farmer.db folder in the folder I mounted to /data, and it even has some files in it (mostly log files, but also some other stuff like CURRENT, LOCK, LOG, MANIFEST-000001 - I'm not sure if these are LevelDB files or the app's). Docker for Windows uses Windows Hyper-V to run a Moby Linux VM which acts as the host OS for the docker client. The Windows volume is therefore mapped to the Moby Linux VM, which I then mount with docker run -v //d/foo:/data.
I don't run Windows, but it could be that mmap() doesn't work correctly under the Windows bash emulation. You can try compiling and running this file against a file on your file system to see if mmap works or not: https://gist.github.com/0d991472c71409f5cdf0ed7aeb4e6222
I don't believe bash is being emulated in this scenario. I believe Moby Linux is being virtualized (by way of a hypervisor) and is then running a docker host, which is in turn running a docker container, which is running the code. That being said, I'll do a bit of googling to see if I can find any outstanding problems with mmap on Docker for Windows. There is a lot of stuff out there about VirtualBox and mmap, but nothing I can find for the new stuff. Also, I don't currently have a setup that would let me easily compile/run that gist in Windows. :/
I've wrapped everything into a docker image so you can test.
Check out the details here. Both tests passed:

PS C:\Users\Micah> docker run --rm -it eugeneware/mmap
hello world
mmap works from internal filesystem - hoorah!
PS C:\Users\Micah> docker run --rm -it -v //d/mmap/local.txt:/app/test.txt eugeneware/mmap
test mmap file

Hmm. Not sure what it could be. Most likely not an mmap() issue then, which is the focus of this issue thread. I'd open up a new issue with some clear steps and code to replicate the issue.
gharchive/issue
2013-12-12T10:44:24
2025-04-01T06:45:42.295063
{ "authors": [ "Zoltu", "caio-ribeiro-pereira", "eugeneware", "olalonde" ], "repo": "rvagg/node-levelup", "url": "https://github.com/rvagg/node-levelup/issues/222", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2761886136
Additional chromeos key mappings
(From my comment in #573) The documentation can be found here: https://support.google.com/chromebook/answer/183101?hl=en
Text editing / Delete the next letter (forward delete): Alt + Backspace
Page & web browser / Page up: Alt + Up arrow
Page & web browser / Page down: Alt + Down arrow
Page & web browser / Go to top of page: Ctrl + Alt + Up arrow
Page & web browser / Go to bottom of page: Ctrl + Alt + Down arrow
Thanks. Merged as 9132184. In the future, please prepend doc: to the commit message for consistency.
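For reference, the mappings above could be approximated with a keyd config along these lines. This is a hedged sketch, not the project's documented example - the layer names ([alt], [control+alt]) and key names are assumptions that should be checked against the keyd man page before use:

```ini
# Sketch of ChromeOS-style bindings for keyd. Assumption: keyd's built-in
# modifier layers [alt] and [control+alt], and the key names shown here,
# work as written - verify against keyd(1) before relying on this.
[ids]
*

[main]

[alt]
# Forward delete
backspace = delete
# Page up / Page down
up = pageup
down = pagedown

[control+alt]
# Go to top / bottom of page
up = home
down = end
```

In a browser, plain Home/End scroll to the top/bottom of the page, which is why the Ctrl+Alt combinations are mapped to those keys here.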
gharchive/pull-request
2024-12-28T19:43:23
2025-04-01T06:45:42.298002
{ "authors": [ "meeuw", "rvaiya" ], "repo": "rvaiya/keyd", "url": "https://github.com/rvaiya/keyd/pull/900", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
138364438
Possible regression for Erlang 17.x
I've been seeing the following errors during compile tonight when using Erlang 17:

$ make compile
...
Compiled src/lfe_user_macros.erl
src/lfe_env.erl:154: illegal use of variable 'N' in map
src/lfe_env.erl:157: illegal use of variable 'N' in map
src/lfe_env.erl:187: illegal use of variable 'N' in map
src/lfe_env.erl:187: illegal use of variable 'N' in map
src/lfe_env.erl:264: illegal use of variable 'N' in map
src/lfe_env.erl:267: illegal use of variable 'N' in map
Makefile:78: recipe for target 'compile' failed
make: *** [compile] Error 1

The last commit that doesn't fail on Erlang 17 (for me) is db084eab6da293655a2dc3fb092fd5829c1ea3bd, indicating that the commit which introduced the regression was d19bf474b8b4a8f397581e0ba8c68f59dbb2eb40.
There were some improvements in 17.3 or 17.5, I think. Which version are you running?
I have many different versions of Erlang installed -- most managed with kerl. The version of 17 with which I experienced this problem was erts 6.2 (17.2). I can't remember where I got it ... I'm running Ubuntu 15.04, but the version of Erlang that comes with that is 17.3, according to Launchpad ...
It might have been 17.5 where they started evaluating map keys...
Related: https://github.com/rvirding/lfe/issues/136
Yes, sorry, this is my fault. I don't make the necessary distinction of whether you are running Erlang 17 or 18. I will fix that today or tomorrow and commit a fix in develop. Officially, evaluating map keys came in 18 - or at least that's what I assume.
gharchive/issue
2016-03-04T03:17:31
2025-04-01T06:45:42.309950
{ "authors": [ "oubiwann", "rvirding", "yurrriq" ], "repo": "rvirding/lfe", "url": "https://github.com/rvirding/lfe/issues/191", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }